a little documentation, since plenty of people fail to install SYSBENCH on Solaris, especially on the SPARC architecture
the basic steps:
1. make sure Solaris Studio is ready
# export PATH=$PATH:/opt/solarisstudio/bin
2. extract, build & install m4
# cd m4-1.4.17/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install
3. add the new binaries to the executable PATH
# export PATH=$PATH:/opt/app/bin
4. extract, build & install autoconf
# cd autoconf-2.69/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install
5. extract, build & install automake
# cd automake-1.14
# ./configure --prefix=/opt/app
checking whether make supports nested variables... yes
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... lib/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... lib/install-sh -c -d
..
..
# make
# make install
6. extract, build & install sysbench
edit the configure.ac file
# cd sysbench-0.4.12
# vi configure.ac
change AC_PROG_LIBTOOL to AC_PROG_RANLIB:
# Checks for programs.
AC_PROG_CC
AC_PROG_LIBTOOL
becomes:
# Checks for programs.
AC_PROG_CC
AC_PROG_RANLIB
# ./configure --prefix=/opt/sysbench CFLAGS=-m64
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking target system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... config/install-sh -c
checking whether build environment is sane... yes
..
..
# make
# make install
now let's benchmark OLTP on MySQL Enterprise.
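For reference, an OLTP run with the freshly built binary might look like the rough sketch below; the host, credentials, table size and thread count are placeholders, not values from this setup:

# sketch only -- mysql host/user/password and sizes below are placeholders
export PATH=$PATH:/opt/sysbench/bin
sysbench --test=oltp --mysql-host=localhost --mysql-user=root --mysql-password=secret \
  --mysql-table-engine=innodb --oltp-table-size=1000000 prepare
sysbench --test=oltp --mysql-host=localhost --mysql-user=root --mysql-password=secret \
  --oltp-table-size=1000000 --num-threads=16 --max-requests=100000 run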
how did it turn out??
MySQL Cluster @ Solaris 10.
node1 [10.0.5.41]: nDB, Sql, Management
node2 [10.0.5.42]: nDB, Sql
node3 [10.0.5.43]: nDB, Sql
since this is only a development setup, the config & datadir directory structure is placed under /apps
# ls /apps
config
ndb_data
mysql_data
# cat /apps/config/config.ini
[TCP DEFAULT]

[NDB_MGMD DEFAULT]
Datadir=/apps/ndb_data/

[NDB_MGMD]
NodeId=1
Hostname=10.0.5.41

[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/apps/ndb_data/

[NDBD]
Hostname=10.0.5.41

[NDBD]
Hostname=10.0.5.42

[NDBD]
Hostname=10.0.5.43

[MYSQLD]
[MYSQLD]
[MYSQLD]
# cat /apps/config/my.cnf
[MYSQLD]
ndbcluster
ndb-connectstring=10.0.5.41
datadir=/apps/mysql_data
socket=/tmp/mysql.sock
user=mysql

[MYSQLD_SAFE]
log-error=/apps/mysqld.log
pid-file=/apps/mysqld.pid

[MYSQL_CLUSTER]
ndb-connectstring=10.0.5.41
Execute @ node1:
# /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini --configdir=/apps/config/ --initial
# /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini --configdir=/apps/config/
MySQL Cluster Management Server mysql-5.5.30 ndb-7.2.12
bash-3.2# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2 (not connected, accepting connect from 10.0.5.41)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
exec @ node1:
# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf
# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf
2013-07-09 23:58:44 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-09 23:58:44 [ndbd] INFO     -- Angel allocated nodeid: 2
# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
execute @ node2 & node3:
# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf
# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf
2013-07-10 00:01:50 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-10 00:01:50 [ndbd] INFO     -- Angel allocated nodeid: 3
check from the cluster management node:
bash-3.2# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

ndb_mgm>
Execute on all nodes:
# /opt/mysql/mysql/scripts/mysql_install_db --defaults-file=/apps/config/my.cnf \
    --user=mysql --datadir=/apps/mysql_data --basedir=/opt/mysql/mysql
# /opt/mysql/mysql/bin/mysqld_safe --defaults-extra-file=/apps/config/my.cnf &
# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)

[mysqld(API)]   3 node(s)
id=5    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
id=6    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12)
id=7    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12)
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.30-ndb-7.2.12-cluster-commercial-advanced MySQL Cluster Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
tadaaa..
all that's left is to configure privileges & create a database with the ndbcluster engine
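A minimal sanity check could be creating a table on the NDB engine and confirming it is visible from every SQL node; the database and table names below are just illustrations:

# placeholder schema -- run from any of the three SQL nodes
mysql -u root -p -e "CREATE DATABASE bench;
CREATE TABLE bench.t1 (id INT NOT NULL PRIMARY KEY, note VARCHAR(64)) ENGINE=NDBCLUSTER;
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA='bench';"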
Removing a Node From a Resource Group
how to remove a node (monyet3) from an active resource group..
# clq show d1

=== Quorum Devices ===

Quorum Device Name:                             d1
  Enabled:                                      yes
  Votes:                                        2
  Global Name:                                  /dev/did/rdsk/d1s2
  Type:                                         shared_disk
  Access Mode:                                  scsi3
  Hosts (enabled):                              monyet3, monyet1, monyet2

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      State
----------       ---------       ---------      -----
MySQL-RG         monyet1         No             Offline
                 monyet2         No             Online
                 monyet3         No             Offline

# clrg status

=== Cluster Resources ===

Resource Name    Node Name       State          Status Message
-------------    ---------       -----          --------------
MySQL-RS         monyet1         Offline        Offline
                 monyet2         Online         Online - Service is online.
                 monyet3         Offline        Offline

MySQL-LH         monyet1         Offline        Offline - LogicalHostname offline.
                 monyet2         Online         Online - LogicalHostname online.
                 monyet3         Offline        Offline

MySQL-HAS        monyet1         Offline        Offline
                 monyet2         Online         Online
                 monyet3         Offline        Offline

# scrgadm -pv -g MySQL-RG
Res Group name:                                   MySQL-RG
  (MySQL-RG) Res Group RG_description:            <NULL>
  (MySQL-RG) Res Group mode:                      Failover
  (MySQL-RG) Res Group management state:          Managed
  (MySQL-RG) Res Group RG_project_name:           default
  (MySQL-RG) Res Group RG_SLM_type:               manual
  (MySQL-RG) Res Group RG_affinities:             <NULL>
  (MySQL-RG) Res Group Auto_start_on_new_cluster: True
  (MySQL-RG) Res Group Failback:                  False
  (MySQL-RG) Res Group Nodelist:                  monyet1 monyet2 monyet3
  (MySQL-RG) Res Group Maximum_primaries:         1
  (MySQL-RG) Res Group Desired_primaries:         1
  (MySQL-RG) Res Group RG_dependencies:           <NULL>
  (MySQL-RG) Res Group network dependencies:      True
  (MySQL-RG) Res Group Global_resources_used:     <All>
  (MySQL-RG) Res Group Pingpong_interval:         3600
  (MySQL-RG) Res Group Pathprefix:                <NULL>
  (MySQL-RG) Res Group system:                    False
  (MySQL-RG) Res Group Suspend_automatic_recovery: False

# scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                  monyet1 monyet2 monyet3
# scrgadm -c -g MySQL-RG -h monyet1,monyet2
# scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                  monyet1 monyet2

# scrgadm -pvv -g MySQL-RG | grep -i netiflist
    (MySQL-RG:MySQL-LH) Res property name:        NetIfList
      (MySQL-RG:MySQL-LH:NetIfList) Res property class: extension
      (MySQL-RG:MySQL-LH:NetIfList) Res property description: List of IPMP groups on each node
      (MySQL-RG:MySQL-LH:NetIfList) Res property pernode: False
      (MySQL-RG:MySQL-LH:NetIfList) Res property type: stringarray
      (MySQL-RG:MySQL-LH:NetIfList) Res property value: sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
# scrgadm -c -j MySQL-LH -x netiflist=sc_ipmp0@1,sc_ipmp0@2
from the active node:
# clnode evacuate monyet3
shut down monyet3 and boot it in non-cluster mode
ok boot -x
to be continued….
building MySQL HA Enterprise on the kandang-monyet & kandang-buaya Solaris Clusters…
since the MySQL Data Service is not yet included in Solaris Cluster.. it has to be registered manually..
  *** Data Services Menu ***

    Please select from one of the following options:

      * 1) Apache Web Server
      * 2) Oracle
      * 3) NFS
      * 4) Oracle Real Application Clusters
      * 5) PeopleSoft Enterprise Application Server
      * 6) Highly Available Storage
      * 7) Logical Hostname
      * 8) Shared Address
      * 9) Per Node Logical Hostname
      *10) Weblogic Server

      * ?) Help
      * q) Return to the Main Menu

    Option:
register Generic Data Service
# clresourcetype register SUNW.gds SUNW.HAStoragePlus
# clresourcetype show

=== Registered Resource Types ===
....
....
Resource Type:                                  SUNW.gds:6
  RT_description:                               Generic Data Service for Oracle Solaris Cluster
  RT_version:                                   6
  API_version:                                  2
  RT_basedir:                                   /opt/SUNWscgds/bin
  Single_instance:                              False
  Proxy:                                        False
  Init_nodes:                                   All potential masters
  Installed_nodes:                              <All>
  Failover:                                     False
  Pkglist:                                      <NULL>
  RT_system:                                    False
  Global_zone:                                  False

Resource Type:                                  SUNW.HAStoragePlus:10
  RT_description:                               HA Storage Plus
  RT_version:                                   10
  API_version:                                  2
  RT_basedir:                                   /usr/cluster/lib/rgm/rt/hastorageplus
  Single_instance:                              False
  Proxy:                                        False
  Init_nodes:                                   All potential masters
  Installed_nodes:                              <All>
  Failover:                                     False
  Pkglist:                                      SUNWscu
  RT_system:                                    False
  Global_zone:                                  True
.....
Create the resource group and logical hostname for failover
# clresourcegroup create MySQL-RG
# clresource create -g MySQL-RG -t SUNW.HAStoragePlus -p AffinityOn=TRUE -p Zpools=zMysql \
    -p ZpoolsSearchDir=/dev/did/dsk MySQL-HAS
# clreslogicalhostname create -g MySQL-RG -h buaya MySQL-LH
# clresource list -v
Resource Name    Resource Type              Resource Group
-------------    -------------              --------------
MySQL-LH         SUNW.LogicalHostname:4     MySQL-RG
MySQL-HAS        SUNW.HAStoragePlus:10      MySQL-RG
Manual registration:
set the parameters to match the configuration & register…
# cp /opt/SUNWscmys/util/mysql_config /export/home/ozzie/mysql_config
# cp /opt/SUNWscmys/util/ha_mysql_config /export/home/ozzie/ha_mysql_config
mysql_config:
MYSQL_BASE=/opt/mysql/mysql
MYSQL_USER=root
MYSQL_PASSWD=baueek
MYSQL_HOST=Buaya
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/mysql.sock
MYSQL_NIC_HOSTNAME=Buaya
MYSQL_DATADIR=/global/mysql
ha_mysql_config:
RS=MySQL-RS
RG=MySQL-RG
PORT=3306
LH=buaya
SCALABLE=
LB_POLICY=
RS_PROP=
HAS_RS=MySQL-HAS
BASEDIR=/opt/mysql/mysql
DATADIR=/global/mysql
MYSQLUSER=mysql
MYSQLHOST=buaya
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/global/mysql/logs
CHECK=yes
register Data Service:
# /opt/SUNWscmys/util/mysql_register -f /export/home/ozzie/mysql_config
# /opt/SUNWscmys/util/ha_mysql_register -f /export/home/ozzie/ha_mysql_config
# clrs enable MySQL-RS
Taddaaaaa
bash-3.2# clrs status

=== Cluster Resources ===

Resource Name    Node Name       State          Status Message
-------------    ---------       -----          --------------
MySQL-RS         buaya2          Online         Online - Service is online.
                 buaya1          Offline        Offline

MySQL-LH         buaya2          Online         Online - LogicalHostname online.
                 buaya1          Offline        Offline

MySQL-HAS        buaya2          Online         Online
                 buaya1          Offline        Offline
now it's just a matter of switching between nodes
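A failover test can be as simple as switching the resource group by hand, roughly like this (node names as used in this cluster):

# move MySQL-RG to the other buaya node, then check resource status
clresourcegroup switch -n buaya1 MySQL-RG
clrs status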
# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 84
Server version: 5.6.12-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Shut down all the nodes.
root@Monyet3:~# scshutdown

Broadcast Message from root (???) on Monyet3 Mon Jun 24 02:42:38...
The cluster kandang-monyet will be shutdown in 1 minute

Broadcast Message from root (???) on Monyet3 Mon Jun 24 02:43:08...
The cluster kandang-monyet will be shutdown in 30 seconds

Do you want to continue? (y or n):   y
Since these are SPARC machines, just add the option at the OK prompt..
SPARC Enterprise T5220, No Keyboard
Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.b, 4096 MB memory available, Serial #83382360.
Ethernet address 0:14:4f:f8:50:58, Host ID: 84f85058.

{0} ok boot -x
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a  File and args: -x
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
\
then just execute clsetup
  *** Main Menu ***

    Select from one of the following options:

        1) Change Network Addressing and Ranges for the Cluster Transport
        2) Show Network Addressing and Ranges for the Cluster Transport

        ?) Help with menu options
        q) Quit

    Option:
  >>> Change Network Addressing and Ranges for the Cluster Transport <<<

    Network addressing for the cluster transport is currently configured as follows:

    === Private Network ===
    private_netaddr:                                172.16.0.0
    private_netmask:                                255.255.240.0
    max_nodes:                                      62
    max_privatenets:                                10
    num_zoneclusters:                               12
    num_xip_zoneclusters:                           3

    Do you want to change this configuration (yes/no) [yes]?

    The default network address for the cluster transport is 172.16.0.0.

    Do you want to use the default (yes/no) [yes]?  no

    What network address do you want to use?  172.16.202.0

    The combination of private netmask and network address will dictate both the maximum
    number of nodes and private networks that can be supported by a cluster. Given your
    private network address, this program will generate a range of recommended private
    netmasks based on the maximum number of nodes and private networks that you anticipate
    for this cluster.

    In specifying the anticipated number of maximum nodes and private networks for this
    cluster, it is important that you give serious consideration to future growth potential.
    While both the private netmask and network address can be changed later, the tools for
    making such changes require that all nodes in the cluster be booted into noncluster mode.

    Maximum number of nodes anticipated for future growth [3]?
    Maximum number of private networks anticipated for future growth [2]?

    Specify a netmask of 255.255.254.0 to meet anticipated future requirements of 3 cluster
    nodes and 2 private networks. To accommodate more growth, specify a netmask of
    255.255.254.0 to support up to 6 cluster nodes and 4 private networks.

    What netmask do you want to use [255.255.254.0]?

    Is it okay to proceed with the update (yes/no) [yes]?

    /usr/cluster/bin/cluster set-netprops -p private_netaddr=172.16.202.0 -p private_netmask=255.255.254.0 -p max_nodes=3 -p max_privatenets=2

    Attempting to contact node "Monyet3" ...done
    Attempting to contact node "Monyet2" ...done

    Command completed successfully.
just reboot and taadddaaaaa..
root@Monyet1:~# clinterconnect show

=== Transport Cables ===

Transport Cable:                                Monyet3:net0,switch1@1
  Endpoint1:                                    Monyet3:net0
  Endpoint2:                                    switch1@1
  State:                                        Enabled

Transport Cable:                                Monyet3:net2,switch2@1
  Endpoint1:                                    Monyet3:net2
  Endpoint2:                                    switch2@1
  State:                                        Enabled

Transport Cable:                                Monyet2:net0,switch1@2
  Endpoint1:                                    Monyet2:net0
  Endpoint2:                                    switch1@2
  State:                                        Enabled

Transport Cable:                                Monyet2:net2,switch2@2
  Endpoint1:                                    Monyet2:net2
  Endpoint2:                                    switch2@2
  State:                                        Enabled

Transport Cable:                                Monyet1:net0,switch1@3
  Endpoint1:                                    Monyet1:net0
  Endpoint2:                                    switch1@3
  State:                                        Enabled

Transport Cable:                                Monyet1:net2,switch2@3
  Endpoint1:                                    Monyet1:net2
  Endpoint2:                                    switch2@3
  State:                                        Enabled

=== Transport Switches ===

Transport Switch:                               switch1
  State:                                        Enabled
  Type:                                         switch
  Port Names:                                   1 2 3
  Port State(1):                                Enabled
  Port State(2):                                Enabled
  Port State(3):                                Enabled

Transport Switch:                               switch2
  State:                                        Enabled
  Type:                                         switch
  Port Names:                                   1 2 3
  Port State(1):                                Enabled
  Port State(2):                                Enabled
  Port State(3):                                Enabled

--- Transport Adapters for Monyet3 ---

Transport Adapter:                              net0
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              0
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.17
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled

Transport Adapter:                              net2
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              2
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.9
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled

--- Transport Adapters for Monyet2 ---

Transport Adapter:                              net0
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              0
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.18
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled

Transport Adapter:                              net2
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              2
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.10
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled

--- Transport Adapters for Monyet1 ---

Transport Adapter:                              net0
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              0
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.19
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled

Transport Adapter:                              net2
  State:                                        Enabled
  Transport Type:                               dlpi
  device_name:                                  net
  device_instance:                              2
  lazy_free:                                    1
  dlpi_heartbeat_timeout:                       10000
  dlpi_heartbeat_quantum:                       1000
  nw_bandwidth:                                 80
  bandwidth:                                    70
  ip_address:                                   172.16.202.11
  netmask:                                      255.255.255.248
  Port Names:                                   0
  Port State(0):                                Enabled
a High-Availability Storage simulation with a 3-node Solaris Cluster and iSCSI, using Solaris 11 & Solaris Cluster 4.1.
once Solaris Cluster is up..
root@Monyet1:~# cluster show -t global

=== Cluster ===
Cluster Name:                                   kandang-monyet
  clusterid:                                    0x51C6CA70
  installmode:                                  disabled
  heartbeat_timeout:                            10000
  heartbeat_quantum:                            1000
  private_netaddr:                              172.16.0.0
  private_netmask:                              255.255.240.0
  max_nodes:                                    62
  max_privatenets:                              10
  num_zoneclusters:                             12
  num_xip_zoneclusters:                         3
  udp_session_timeout:                          480
  concentrate_load:                             False
  resource_security:                            SECURE
  global_fencing:                               prefer3
  Node List:                                    Monyet3, Monyet2, Monyet1

root@Monyet1:~# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
Monyet3                                         Online
Monyet2                                         Online
Monyet1                                         Online
Add a quorum disk device (see the sketch after the device listing below)
root@Monyet1:~# cldevice status
=== Cluster DID Devices ===
Device Instance                 Node               Status
---------------                 ----               ------
/dev/did/rdsk/d1                Monyet3            Unmonitored
/dev/did/rdsk/d2                Monyet1            Ok
                                Monyet2            Ok
                                Monyet3            Ok
/dev/did/rdsk/d3                Monyet2            Unmonitored
/dev/did/rdsk/d5                Monyet1            Unmonitored
Enable automatic node reboot if all monitored shared-disk paths fail:
root@Monyet1:~# clnode set -p reboot_on_path_failure=enabled +
root@Monyet1:~# clnode show

=== Cluster Nodes ===

Node Name:                                      Monyet3
  Node ID:                                      1
  Enabled:                                      yes
  privatehostname:                              clusternode1-priv
  reboot_on_path_failure:                       enabled
  globalzoneshares:                             1
  defaultpsetmin:                               1
  quorum_vote:                                  1
  quorum_defaultvote:                           1
  quorum_resv_key:                              0x51C6CA7000000001
  Transport Adapter List:                       net0, net2

Node Name:                                      Monyet2
  Node ID:                                      2
  Enabled:                                      yes
  privatehostname:                              clusternode2-priv
  reboot_on_path_failure:                       enabled
  globalzoneshares:                             1
  defaultpsetmin:                               1
  quorum_vote:                                  1
  quorum_defaultvote:                           1
  quorum_resv_key:                              0x51C6CA7000000002
  Transport Adapter List:                       net0, net2

Node Name:                                      Monyet1
  Node ID:                                      3
  Enabled:                                      yes
  privatehostname:                              clusternode3-priv
  reboot_on_path_failure:                       enabled
  globalzoneshares:                             1
  defaultpsetmin:                               1
  quorum_vote:                                  1
  quorum_defaultvote:                           1
  quorum_resv_key:                              0x51C6CA7000000003
  Transport Adapter List:                       net0, net2
Registering the cluster storage & Network service
root@Monyet1:~# clresourcetype register SUNW.gds SUNW.HAStoragePlus
Create a resource group spanning the Monyet nodes
root@Monyet1:~# clresourcegroup create -n Monyet1,Monyet2,Monyet3 RG-MONYET
root@Monyet1:~# clresourcegroup status

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Unmanaged
                 Monyet2         No             Unmanaged
                 Monyet3         No             Unmanaged

root@Monyet1:~# clresourcegroup manage RG-MONYET
root@Monyet1:~# clresourcegroup status

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Offline
                 Monyet2         No             Offline
                 Monyet3         No             Offline
Create the ZFS pool for the movie collection – 'poolBOKEP' – before adding it as a cluster resource
root@Monyet1:~# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2d0 <SUN-Disk-40GB cyl 1135 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c3t0d0 <iSCSI Disk-0123 cyl 19455 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2013-03.org.kebonbinatang.storage1%3Adisk00001,0
Specify disk (enter its number):
Specify disk (enter its number):
root@Monyet1:~# zpool create poolBOKEP c3t0d0
root@Monyet1:~# zpool export poolBOKEP
add 'poolBOKEP' as a resource in resource group RG-MONYET:
root@Monyet1:~# clresource create -g RG-MONYET -t SUNW.HAStoragePlus -p AffinityOn=TRUE -p Zpools=poolBOKEP -p \
ZpoolsSearchDir=/dev/did/dsk RS-BOKEP-HAS
root@Monyet1:~# clresource list
RS-BOKEP-HAS
root@Monyet1:~# clresource show

=== Resources ===

Resource:                                       RS-BOKEP-HAS
  Type:                                         SUNW.HAStoragePlus:10
  Type_version:                                 10
  Group:                                        RG-MONYET
  R_description:
  Resource_project_name:                        default
  Enabled{Monyet1}:                             True
  Enabled{Monyet2}:                             True
  Enabled{Monyet3}:                             True
  Monitored{Monyet1}:                           True
  Monitored{Monyet2}:                           True
  Monitored{Monyet3}:                           True
Import the poolBOKEP pool created earlier:
root@Monyet1:~# zpool import poolBOKEP
add a virtual IP resource to resource group RG-MONYET:
root@Monyet1:~# clreslogicalhostname create -g RG-MONYET -h Monyet -N \
sc_ipmp0@Monyet1,sc_ipmp0@Monyet2,sc_ipmp0@Monyet3 RS-MONYET
root@Monyet1:~# clresource list
RS-MONYET
RS-BOKEP-HAS
root@Monyet1:~# clresource show

=== Resources ===

Resource:                                       RS-BOKEP-HAS
  Type:                                         SUNW.HAStoragePlus:10
  Type_version:                                 10
  Group:                                        RG-MONYET
  R_description:
  Resource_project_name:                        default
  Enabled{Monyet1}:                             True
  Enabled{Monyet2}:                             True
  Enabled{Monyet3}:                             True
  Monitored{Monyet1}:                           True
  Monitored{Monyet2}:                           True
  Monitored{Monyet3}:                           True

Resource:                                       RS-MONYET
  Type:                                         SUNW.LogicalHostname:4
  Type_version:                                 4
  Group:                                        RG-MONYET
  R_description:
  Resource_project_name:                        default
  Enabled{Monyet1}:                             True
  Enabled{Monyet2}:                             True
  Enabled{Monyet3}:                             True
  Monitored{Monyet1}:                           True
  Monitored{Monyet2}:                           True
  Monitored{Monyet3}:                           True
from here you can just move the resource group between the other Monyet nodes [failover]
root@Monyet1:~# clresourcegroup switch -n Monyet3 RG-MONYET
or, to put it back on its preferred node:
root@Monyet1:~# clresourcegroup remaster RG-MONYET
all that's left is to fill poolBOKEP with movies
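From whichever node currently owns the pool (Monyet2 in the status below), a dataset can be carved out for the movies; the dataset name is just an example:

# example dataset on the HA pool, created on the node that currently owns it
zfs create poolBOKEP/pelem
zfs list -r poolBOKEP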
root@Monyet3:~# scstat
------------------------------------------------------------------

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     Monyet3             Online
  Cluster node:     Monyet2             Online
  Cluster node:     Monyet1             Online

------------------------------------------------------------------

-- Cluster Transport Paths --

                    Endpoint               Endpoint               Status
                    --------               --------               ------
  Transport path:   Monyet3:net2           Monyet2:net2           Path online
  Transport path:   Monyet3:net0           Monyet2:net0           Path online
  Transport path:   Monyet3:net2           Monyet1:net2           Path online
  Transport path:   Monyet3:net0           Monyet1:net0           Path online
  Transport path:   Monyet2:net2           Monyet1:net2           Path online
  Transport path:   Monyet2:net0           Monyet1:net0           Path online

------------------------------------------------------------------

-- Quorum Summary from latest node reconfiguration --

  Quorum votes possible:      5
  Quorum votes needed:        3
  Quorum votes present:       5

-- Quorum Votes by Node (current status) --

                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       Monyet3             1        1       Online
  Node votes:       Monyet2             1        1       Online
  Node votes:       Monyet1             1        1       Online

-- Quorum Votes by Device (current status) --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d2s2  2        2       Online

------------------------------------------------------------------

-- Device Group Servers --

                         Device Group        Primary             Secondary
                         ------------        -------             ---------

-- Device Group Status --

                              Device Group        Status
                              ------------        ------

-- Multi-owner Device Groups --

                              Device Group        Online Status
                              ------------        -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

            Group Name     Resources
            ----------     ---------
 Resources: RG-MONYET      RS-BOKEP-HAS RS-MONYET

-- Resource Groups --

            Group Name     Node Name                State          Suspended
            ----------     ---------                -----          ---------
     Group: RG-MONYET      Monyet1                  Offline        No
     Group: RG-MONYET      Monyet2                  Online         No
     Group: RG-MONYET      Monyet3                  Offline        No

-- Resources --

            Resource Name  Node Name                State          Status Message
            -------------  ---------                -----          --------------
  Resource: RS-BOKEP-HAS   Monyet1                  Offline        Offline
  Resource: RS-BOKEP-HAS   Monyet2                  Online         Online
  Resource: RS-BOKEP-HAS   Monyet3                  Offline        Offline

  Resource: RS-MONYET      Monyet1                  Offline        Offline - LogicalHostname offline.
  Resource: RS-MONYET      Monyet2                  Online         Online - LogicalHostname online.
  Resource: RS-MONYET      Monyet3                  Offline        Offline

------------------------------------------------------------------

-- IPMP Groups --

               Node Name           Group       Status         Adapter   Status
               ---------           -----       ------         -------   ------
  IPMP Group:  Monyet3             sc_ipmp0    Online         net1      Online
  IPMP Group:  Monyet2             sc_ipmp0    Online         net1      Online
  IPMP Group:  Monyet1             sc_ipmp0    Online         net1      Online

------------------------------------------------------------------
since iSCSI is used as the shared storage for all the buaya nodes (a sketch of attaching the LUNs follows the disk listing below)
# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <HITACHI-H103014SCSUN146G-A160-136.73GB>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c2t2d0 <iSCSIDisk-0123 cyl 6524 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2011-03.org.kebonbinatang.storage2%3Adisk20001,0
       2. c2t3d0 <iSCSIDisk-0123 cyl 6524 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2011-03.org.kebonbinatang.storage2%3Adisk20001,1
Specify disk (enter its number):
Specify disk (enter its number):
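For the record, attaching those iSCSI LUNs on each buaya node beforehand looks roughly like this; the portal address is a placeholder for the storage2 target host:

# placeholder portal IP -- point it at the storage2 iSCSI target
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 10.0.5.200:3260
devfsadm -i iscsi    # build device nodes for the newly discovered LUNs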
# clq show

=== Cluster Nodes ===

Node Name:                                      Buaya1
  Node ID:                                      1
  Quorum Vote Count:                            1
  Reservation Key:                              0x51C625D900000001

Node Name:                                      Buaya2
  Node ID:                                      2
  Quorum Vote Count:                            1
  Reservation Key:                              0x51C625D900000002

=== Quorum Devices ===

Quorum Device Name:                             d2
  Enabled:                                      yes
  Votes:                                        1
  Global Name:                                  /dev/did/rdsk/d2s2
  Type:                                         shared_disk
  Access Mode:                                  scsi2
  Hosts (enabled):                              Buaya1, Buaya2
# cldevice show

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                             Buaya2:/dev/rdsk/c2t3d0
  Full Device Path:                             Buaya1:/dev/rdsk/c2t3d0
  Replication:                                  none
  default_fencing:                              global

DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                             Buaya1:/dev/rdsk/c2t2d0
  Full Device Path:                             Buaya2:/dev/rdsk/c2t2d0
  Replication:                                  none
  default_fencing:                              global

DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                             Buaya1:/dev/rdsk/c1t0d0
  Replication:                                  none
  default_fencing:                              global

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                             Buaya2:/dev/rdsk/c1t0d0
  Replication:                                  none
  default_fencing:                              global
disable fencing for that shared d2 device
# cldevice set -p default_fencing=nofencing-noscrub d2
#
# cldevice show

=== DID Device Instances ===
.....
.....
DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                             Buaya1:/dev/rdsk/c2t2d0
  Full Device Path:                             Buaya2:/dev/rdsk/c2t2d0
  Replication:                                  none
  default_fencing:                              nofencing
.....
.....
download Oracle Solaris Cluster 4.1. this time the base OS is Solaris 11, with IPS
mount the source repository and refresh its IPS publisher:
# mount -F hsfs /export/home/ozzie/osc-4_1-ga-repo-full.iso /mnt/
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
# pkg set-publisher -G "*" -g file:///mnt/repo ha-cluster
# pkg refresh
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
ha-cluster                  origin   online F file:///mnt/repo/
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
# pkg install ha-cluster-framework-full
           Packages to install:  26
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   6

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              26/26     2794/2794    27.5/27.5    0B/s

PHASE                                          ITEMS
Installing new actions                     3936/3936
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
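Before moving on to scinstall it is worth confirming the framework landed and putting the cluster tools on the PATH; a small sketch:

# verify the package and expose /usr/cluster/bin for this shell
pkg list ha-cluster-framework-full
export PATH=$PATH:/usr/cluster/bin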
Create the cluster with /usr/cluster/bin/scinstall
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
        2) Upgrade this cluster node
        3) Manage a dual-partition upgrade
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1

  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:
  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any characters other
    than whitespace. Each cluster name should be unique within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  Kandang-Monyet

  >>> Check <<<

    This step allows you to run cluster check to verify that certain basic hardware and
    software pre-configuration requirements have been met. If cluster check detects potential
    problems with configuring this machine as a cluster node, a report of violated checks is
    prepared and available for display on the screen.

    Do you want to run cluster check (yes/no) [yes]?

  >>> Cluster Nodes <<<

    This Oracle Solaris Cluster release supports a total of up to 16 nodes.

    List the names of the other nodes planned for the initial cluster configuration. List one
    node name per line. When finished, type Control-D:

    Node name (Control-D to finish):  Monyet1
    Node name (Control-D to finish):  Monyet2
    Node name (Control-D to finish):  Monyet3
    Node name (Control-D to finish):  ^D

  >>> Cluster Transport Adapters and Cables <<<

    Transport adapters are the adapters that attach to the private cluster interconnect.

    Select the first cluster transport adapter:

        1) net0
        2) net2
        3) Other

    Option:  1

    Searching for any unexpected network traffic on "net0" ... done
    Unexpected network traffic was seen on "net0".
    "net0" may be cabled to a public network.

    Do you want to use "net0" anyway (yes/no) [no]?  yes

    Select the second cluster transport adapter:

        1) net0
        2) net2
        3) Other

    Option:  2

    Searching for any unexpected network traffic on "net2" ... done
    Unexpected network traffic was seen on "net2".
    "net2" may be cabled to a public network.

    Do you want to use "net2" anyway (yes/no) [no]?

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

      scinstall -i \
           -C kandang-monyet \
           -F \
           -G lofi \
           -T node=Monyet1,node=Monyet2,authtype=sys \
           -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=32,maxprivatenets=10,numvirtualclusters=12,numxipvirtualclusters=3 \
           -A trtype=dlpi,name=net0 -A trtype=dlpi,name=net2 \
           -B type=switch,name=switch1 -B type=switch,name=switch2 \
           -m endpoint=:net0,endpoint=switch1 \
           -m endpoint=:net2,endpoint=switch2 \
           -P task=security,state=SECURE

    Are these the options you want to use (yes/no) [yes]?

    Do you want to continue with this configuration step (yes/no) [yes]?

Initializing cluster name to "kandang-monyet" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net0" ... done
Initializing configuration for adapter "net2" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for switch "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Initializing private network address options ... done

Setting the node ID for "Monyet1" ... done (id=1)
just a reminder
when the ssh X-Forwarding configuration on an HP-UX machine is already correct.. these errors can still show up:
Error: Can’t open display:
Error: Couldn’t find per display information
right when / supposing / just in case you want to run an application that needs a GUI..
# echo "hosts: files dns" > /etc/nsswitch.conf
trivial, really… but better than panicking?
Prerequisites:
enable X Forwarding via SSH
edit /etc/ssh/sshd_config
# X11 tunneling options
X11Forwarding yes
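After restarting sshd, forwarding can be verified with any small X client; the hostname below is a placeholder, and on HP-UX the clients usually live under /usr/bin/X11:

# placeholder hostname -- sshd sets DISPLAY when X11Forwarding works
ssh -X root@hpux-host 'echo $DISPLAY'
ssh -X root@hpux-host '/usr/bin/X11/xclock'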
default Oracle Business Intelligence URLs:
Component | Default URL | Port |
Oracle BI Presentation Services | http://host:9704/analytics | 9704 |
WebLogic Console | http://host:7001/console | 7001 |
Enterprise Manager | http://host:7001/em | 7001 |
Business Intelligence Publisher | http://host:9704/xmlpserver | 9704 |
Real-Time Decisions | http://host:9704/ui | 9704 |
to be continued..
Oracle VM Manager, with Oracle Linux 6.x as the base OS. install the OS as usual..
Minimum spec prerequisites for OVM:
all sources can be downloaded from: https://edelivery.oracle.com/
to update Oracle Linux packages via yum (see here)
here is the list of ports required for OVM – OVS – Client communication
finally found time to edit the OutBound D2M adventure video, Hasil Kebon 2013
Tanakita CampSite: 1-3 March 2013
Exploring & building cloud & Solaris virtualization (LDOMs & Zones).
just a review: Oracle Enterprise Manager Ops Center. all the features; monitoring, provisioning, managing, maintaining.. even development work is covered with deployment plans… migrating zones & VMs.. server & storage pools..
Enterprise Manager Ops Center 12c & Enterprise Manager Cloud Control 12c
*Just a reminder
with a GUI install you usually configure the domain via quickstart.sh
for non-GUI SOLARIS machines:
# {WLS_HOME}/common/bin/config.sh -mode=console
oh yeah ..
for Solaris 11 you have to install the JDK (there is no javac by default)
# pkg install --accept pkg:/developer/java/jdk@1.7.0.7-0.175.1.0.0.24.0
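A quick check afterwards that the compiler is actually there (the reported version will depend on the package installed):

# javac should now resolve from the default PATH
which javac
javac -version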