HP Integrity Virtual Server configuration as a Serviceguard package
Contents
- Overview
- Software requirements
- Network configuration
- APA
- VLAN
- Virtual switch configuration
- SAN and HBA configuration
- HPVM creation
- GUID server
- Install HP-UX on the VM
- Configure Serviceguard
- Move the HPVM into Serviceguard
1. Overview
I need to install an HP Integrity Virtual Server running HP-UX 11.31 as a Serviceguard package.
I use NPIV (virtual HBAs). The advantage is that each VM has its own WWNs and the
SAN team can zone LUNs directly to the VM.
The VM host does not see the disks of the virtual servers.
2. Software requirements
I use the following software versions:
ServiceGuard     A.11.20.00      Serviceguard SD Product
PHSS_42137       1.0             Serviceguard A.11.20.00
SG-IVS-TKIT      B.02.00         Serviceguard Toolkit for Integrity Virtual Servers
GUIDManager      A.01.00.579     HP-UX GUID Manager
BB068AA          B.06.10         HP-UX vPars & Integrity VM v6

The product BB068AA contains the following filesets:

BB068AA.AVIO-HVSD   B.11.31.1203   HPVM Host AVIO Storage Software
BB068AA.AVIO-HSSN   B.11.31.1203   HP AVIO LAN HSSN Host Driver
BB068AA.vmGuestSW   B.06.10        Integrity VM vmGuestSW

HPVM             B.06.10         Integrity VM HPVM
PHKL_42444       1.0             vm cumulative patch
PHKL_42449       1.0             vm page table mgmt cumulative patch
PHKL_42484       1.0             HPVM Fix RID length, guest para-virtualization
vmGuestLib       B.06.10         Integrity VM vmGuestLib
vmGuestSW        B.06.10         Integrity VM vmGuestSW
vmKernel         B.06.10         Integrity VM vmKernel
vmProvider       B.06.10         WBEM Provider for Integrity VM vmProvider
vmVirtProvider   B.06.10         Integrity VM vmVirtProvider
vmmgr            A.6.1.0.89276   HP-UX Integrity Virtual Server Manager

For HP-UX 11.31 guests, the guest depot file is:
/opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd
The guest depot provides:
- Operating system patches to optimize virtual machine operation
- Integrity VM management tools, including hpvmcollect and hpvminfo
- The VM Provider, which allows you to use VM Manager to manage the guest
Install the latest patches and System Fault Management software from software.hp.com.
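To verify what is already installed before you start, swlist can be used; a quick check (only a sketch, the exact product/bundle names in the swlist output may differ on your system):

# swlist | grep -E 'Serviceguard|SG-IVS-TKIT|GUIDManager|BB068AA'
# swlist -l patch | grep -E 'PHSS_42137|PHKL_42444|PHKL_42449|PHKL_42484'

The first command lists the installed bundles and products, the second one the installed patches.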
All virtual machines have virtual Fibre Channel cards (NPIV).
The advantage is that no disk mapping to the HA hosts is needed.
The disks are zoned directly to the VM; on the physical host you cannot see these disks.
For critical hosts/applications the disks are mirrored within the VM to the other data center.
3. Network configuration
For the production LAN and the backup LAN we use different networks for the different VMs.
We use VLANs on these interfaces on the VM host.
4. APA
Enable APA:
- Log in via the console.
- Identify the interface pair.
- Build the APA interface (you can also use the ncweb user interface for this configuration):
# nwmgr -a -A links=0,1 -A mode=MANUAL -A lb=LB_HOT_STANDBY -c lan900 -S apa
# nwmgr -s -S apa -A all --saved --from cu
This bundles lan0 and lan1 into lan900 in hot-standby mode (LB_HOT_STANDBY); the second command stores the current configuration as the saved configuration.
You can display the configuration:
# nwmgr -g -S apa -I 900 -v
lan900 current values:
Speed                             = 1 Gbps Full Duplex
MTU                               = 1500
Virtual Maximum Transmission Unit = 32160
MAC Address                       = 0xXXXXXXXXXXXX
Network Management ID             = 14
Features                          = Linkagg Interface
                                    IPV4 Recv CKO
                                    IPV4 Send CKO
                                    VLAN Support
                                    VLAN Tag Offload
                                    64Bit MIB Support
                                    IPV4 TCP Segmentation Offload
Load Distribution Algorithm       = LB_HS
Mode                              = MANUAL
Parent PPA                        = -
APA State                         = Up
Membership                        = 0,1
Active Port(s)                    = 0
Not Ready Port(s)                 = -
Fixed Mac Address                 = off
Now you can edit /etc/rc.config.d/netconf and change or add whatever is needed for your lan900 interface.
Example:
INTERFACE_NAME[0]="lan900"
IP_ADDRESS[0]="XX.XX.XX.XX"
SUBNET_MASK[0]="255.255.255.0"
BROADCAST_ADDRESS[0]="XX.XX.XX.XX"
INTERFACE_STATE[0]="up"
DHCP_ENABLE[0]=""
INTERFACE_MODULES[0]=""

Display the APA configuration:
# nwmgr -g -S apa
Class      Mode        Load       Speed-                Membership
Instance               Balancing  Duplex
========   =========== ========= ====================  ===========================
lan900     MANUAL      LB_HS      1 Gbps Full Duplex    0,1
lan901     MANUAL      LB_HS      1 Gbps Full Duplex    2,3
lan902     MANUAL      LB_HS      1 Gbps Full Duplex    4,5
lan903     MANUAL      LB_HS      1 Gbps Full Duplex    10,11

# nwmgr
Name/          Interface Station        Sub-     Interface      Related
ClassInstance  State     Address        system   Type           Interface
============== ========= ============== ======== ============== =========
lan0           UP        0xXXXXXXXXXXXX iexgbe   10GBASE-KR     lan900
lan1           UP        0xXXXXXXXXXXXX iexgbe   10GBASE-KR     lan900
lan2           UP        0xXXXXXXXXXXXX iexgbe   10GBASE-KR     lan901
lan3           UP        0xXXXXXXXXXXXX iexgbe   10GBASE-KR     lan901
lan4           UP        0xXXXXXXXXXXXX iether   1000Mb/s       lan902
lan5           UP        0xXXXXXXXXXXXX iether   1000Mb/s       lan902
lan6           DOWN      0xXXXXXXXXXXXX iexgbe   10GBASE-KR
lan7           DOWN      0xXXXXXXXXXXXX iexgbe   10GBASE-KR
lan8           DOWN      0xXXXXXXXXXXXX iexgbe   10GBASE-KR
lan9           DOWN      0xXXXXXXXXXXXX iexgbe   10GBASE-KR
lan10          UP        0xXXXXXXXXXXXX iether   1000Mb/s       lan903
lan11          UP        0xXXXXXXXXXXXX iether   1000Mb/s       lan903
lan900         UP        0xXXXXXXXXXXXX hp_apa   hp_apa
lan901         UP        0xXXXXXXXXXXXX hp_apa   hp_apa
lan902         UP        0xXXXXXXXXXXXX hp_apa   hp_apa
lan903         UP        0xXXXXXXXXXXXX hp_apa   hp_apa
lan904         DOWN      0xXXXXXXXXXXXX hp_apa   hp_apa
5. VLAN
If one physical network interface carries more than one LAN, you must enable VLAN tagging.
The port configuration on the blade switch (Cisco) looks like this example:
interface GigabitEthernet7/0/9
 description
 switchport trunk allowed vlan 77,78
 switchport mode trunk
 no logging event link-status
 no snmp trap link-status
 spanning-tree portfast trunk
end

interface GigabitEthernet8/0/9
 description
 switchport trunk allowed vlan 77,78
 switchport mode trunk
 no logging event link-status
 no snmp trap link-status
 spanning-tree portfast trunk
end

Host system:
Configure the VLAN interface with ncweb for the specific physical interface.
After that, check the configuration in /etc/rc.config.d/vlanconf.
Example: extract from /etc/rc.config.d/vlanconf for VLAN 77, network XXX.XX.X.X/16, on APA interface lan902:
VLAN_VPPA[0]=5000
VLAN_PHY_INTERFACE[0]=902
VLAN_ID[0]=77
VLAN_NAME[0]=77
VLAN_PRIORITY[0]=0
VLAN_PRI_OVERRIDE[0]=CONF_PRI
VLAN_TOS[0]=0
VLAN_TOS_OVERRIDE[0]=IP_HEADER

Check the configuration:
# lanadmin -V scan
VLAN       Physical   VLAN  PRI    Pri       ToS    Tos        Name
Interface  Interface  ID    Level  Override  Level  Override
Name
lan5000    lan902     77    0      CONF_PRI  0      IP_HEADER  77

Now configure the interface lan5000 in /etc/rc.config.d/netconf and on the running system with:
# ifconfig lan5000 XXX.XX.X.XX netmask 255.255.0.0 broadcast XXX.XX.255.255 up

Extract from /etc/rc.config.d/netconf:
INTERFACE_NAME[2]="lan5000"
IP_ADDRESS[2]="XXX.XX.X.XX"
SUBNET_MASK[2]="255.255.0.0"
BROADCAST_ADDRESS[2]="XXX.XX.255.255"
INTERFACE_STATE[2]="up"
DHCP_ENABLE[2]="0"
INTERFACE_MODULES[2]=""

On the HPVM (guest) system:
Configure the VLAN interface with ncweb for the specific physical interface.
After that, check the configuration in /etc/rc.config.d/vlanconf.
Example: extract from /etc/rc.config.d/vlanconf for VLAN 78, network XXX.XX.0.0/16, on interface lan2:
VLAN_VPPA[0]=5000
VLAN_PHY_INTERFACE[0]=2
VLAN_ID[0]=78
VLAN_NAME[0]=78
VLAN_PRIORITY[0]=0
VLAN_PRI_OVERRIDE[0]=CONF_PRI
VLAN_TOS[0]=0
VLAN_TOS_OVERRIDE[0]=IP_HEADER

Check the configuration:
# lanadmin -V scan
VLAN       Physical   VLAN  PRI    Pri       ToS    Tos        Name
Interface  Interface  ID    Level  Override  Level  Override
Name
lan5000    lan2       78    0      CONF_PRI  0      IP_HEADER  78

Now configure the interface lan5000 in /etc/rc.config.d/netconf and on the running system with:
# ifconfig lan5000 XXX.XX.X.XX netmask 255.255.0.0 broadcast XXX.XX.255.255 up

Repeat the VLAN configuration steps for all interfaces that use VLANs.
For switchable HPVMs the interface configuration must be the same on both nodes.
You can check the LAN configuration with the following commands:
- nwmgr
- netstat -ni
- lanadmin -V scan
6. Virtual switch configuration
First we create the virtual switches for the VMs. Do this on both nodes.
# hpvmnet -c -S vswitch01 -n 900
# hpvmnet -c -S vswitch02 -n 901
# hpvmnet -c -S vswitch03 -n 902
# hpvmnet -c -S vswitch04 -n 903

Enable VLANs on switch vswitch03 on all ports:
# hpvmnet -S vswitch03 -i portid:all:vlanid:all
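After creating the virtual switches you can start them and display their state. A minimal check (a sketch, using the switch names from above):

# hpvmnet -b -S vswitch01
# hpvmnet -b -S vswitch02
# hpvmnet -b -S vswitch03
# hpvmnet -b -S vswitch04
# hpvmnet

hpvmnet -b boots (starts) a virtual switch; hpvmnet without options lists all switches with their state and the backing interface.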
7. SAN and HBA configuration
It is important that the Fibre Channel interfaces see the same N_Port IDs through
the same device files on each host. Example:
Both nodes have the same FC devices fcd0 and fcd2.
You can check this with the following commands:
On physicalhost1:
# fcdutil /dev/fcd0 get remote all

Target N_Port_id is = 0x172400   <----------------------------
Target state = DSM_READY
Symbolic Port Name = HP DISK-SUBSYSTEM 6007
Symbolic Node Name = None
Port Type = N_PORT
FCP-2 Support = NO
Target Port World Wide Name = 0xXXXXXXXXXXXXXXXX
Target Node World Wide Name = 0xXXXXXXXXXXXXXXXX

Common Service parameters (all values shown in hex):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Common Features : 8800  RO_Bitmap: ffff  Total Conseq: ff

Class 3 Service parameters (all values shown in hex):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Open Sequences/Exchg: 1  Conseq: ff  Recipient Control Flags: 0  Rxsz: 800

If you run the same command on the other host, physicalhost2, you must see the same N_Port ID!
# fcdutil /dev/fcd0 get remote all

Target N_Port_id is = 0x172400   <----------------------------
Target state = DSM_READY
Symbolic Port Name = HP DISK-SUBSYSTEM 6007
Symbolic Node Name = None
Port Type = N_PORT
FCP-2 Support = NO
Target Port World Wide Name = 0xXXXXXXXXXXXXXXXX
Target Node World Wide Name = 0xXXXXXXXXXXXXXXXX

Common Service parameters (all values shown in hex):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Common Features : 8800  RO_Bitmap: ffff  Total Conseq: ff

Class 3 Service parameters (all values shown in hex):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Open Sequences/Exchg: 1  Conseq: ff  Recipient Control Flags: 0  Rxsz: 800

Repeat this for each FC port that you use. Now you can configure the HPVM on the first node.
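To compare all used FC ports in one step, a small loop over the device files can help; this is only a sketch (device files and the exact fcmsutil field names may differ on your hosts):

# for f in /dev/fcd0 /dev/fcd2
> do
>   echo $f
>   fcmsutil $f | grep -i n_port
> done

Run the same loop on both physical hosts and compare the N_Port IDs line by line.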
8. HPVM creation
Create a VM with 4 CPUs and 4 GB of memory.
# hpvmcreate -P vmhost1 -c 4 -r 4G

Add LAN interfaces with the hpvmmodify command. Syntax:
hpvmmodify -P vmhost1 -a network:adaptertype:bus,device,mac-addr:vswitch:vswitchname:portid:portnumber
# hpvmmodify -P vmhost1 -a network:avio_lan:1,0,:vswitch:vswitch01

Repeat this step for each desired LAN interface.
# hpvmmodify -P vmhost1 -a network:avio_lan:2,0,:vswitch:vswitch02
# hpvmmodify -P vmhost1 -a network:avio_lan:3,0,:vswitch:vswitch03
# hpvmmodify -P vmhost1 -a network:avio_lan:4,0,:vswitch:vswitch04
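The VM also needs a boot disk backing store. It is added with the same hpvmmodify syntax; the disk device file below is only an example (it corresponds to the /dev/rdisk/disk22 shown later in the hpvmstatus output):

# hpvmmodify -P vmhost1 -a disk:avio_stor::disk:/dev/rdisk/disk22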
9. GUID server
Note: the SSH public key from /opt/guid/var/guidmgr_rsa.pub must be added to the authorized_keys file in the home directory of the user guiddb.
Set up the WWN range (example):
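A minimal way to add the key on the GUID database host (a sketch; it assumes the usual ~guiddb/.ssh layout, and ownership/permissions may differ in your environment):

# cat /opt/guid/var/guidmgr_rsa.pub >> ~guiddb/.ssh/authorized_keys
# chmod 600 ~guiddb/.ssh/authorized_keys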
# /opt/guid/bin/guidmgmt -S wwn 0x5001XXXXXXXXXXXX 0x5001XXXXXXXXXXXX

Now you can add the NPIV devices to the VM. You must reboot the VM to activate them.
# hpvmmodify -P vmhost1 -a hba:avio_stor::npiv:/dev/fcd0
# hpvmmodify -P vmhost1 -a hba:avio_stor::npiv:/dev/fcd2

After the reboot (from the host system):
# hpvmstatus -p1 -d
[Virtual Machine Devices]

[Storage Interface Details]
disk:avio_stor:0,0,0:disk:/dev/rdisk/disk22
hba:avio_stor:0,1,0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX:npiv:/dev/fcd0   <----
hba:avio_stor:0,2,0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX:npiv:/dev/fcd2   <----

[Network Interface Details]
network:avio_lan:1,0,0xXXXXXXXXXXXX:vswitch:vswadm01:portid:1

[Direct I/O Interface Details]

[Misc Interface Details]
serial:com1::tty:console

Check the association of the physical adapter to the virtual HBA (on the host system):
# /opt/fcms/bin/fcmsutil /dev/fcd2 npiv_info

PFC Hardware Path   = 0/0/0/7/0/0/0
PFC DSF             = /dev/fcd2
PFC Class Instance  = 2
PFC Driver state    = ONLINE
PFC Port WWN        = 0x5001XXXXXXXXXXXX
PFC Node WWN        = 0x5001XXXXXXXXXXXX
PFC Switch Port WWN = 0x2e46XXXXXXXXXXXX
PFC Switch Node WWN = 0x1000XXXXXXXXXXXX

FlexFC Virtual Fibre Channel (VFC)
----------------------------------
Maximum Supported FlexFC VFC = 16
Number Active FlexFC VFC     = 0

HPVM Virtual Fibre Channel (VFC)
----------------------------------
Maximum Supported HPVM VFC = 16
Number Active HPVM VFC     = 1

The following provides the list of VFC(s) associated with this PFC:

Type               = HPVM VFC
VFC Index          = 17
VFC Guest ID       = 0x1
VFC Port WWN       = 0x5001XXXXXXXXXXXX   <---
VFC Node WWN       = 0x5001XXXXXXXXXXXX
VFC Driver state   = ONLINE
VFC DSF            = /dev/fcd9
VFC Class Instance = 9

On the guest system:
# ioscan -kfNd gvsd
Class    I  H/W Path  Driver  S/W State  H/W Type   Description
=================================================================
ext_bus  0  0/0/0/0   gvsd    CLAIMED    INTERFACE  HPVM AVIO Stor Adapter
ext_bus  2  0/0/1/0   gvsd    CLAIMED    INTERFACE  HPVM NPIV Stor Adapter
ext_bus  3  0/0/2/0   gvsd    CLAIMED    INTERFACE  HPVM NPIV Stor Adapter

Now you can zone your SAN disks to the VFC Port WWNs.
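After the SAN team has zoned the LUNs to these VFC Port WWNs, the guest can rescan and create the device files; a sketch with standard HP-UX 11.31 commands:

# ioscan -fnNC disk
# insf -e
# ioscan -m dsf

ioscan -fnNC disk rescans and lists the disk devices in the agile view, insf -e creates any missing device special files, and ioscan -m dsf shows the mapping between persistent and legacy DSFs.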
On the hpvmconsole (EFI shell) you must set drvcfg, for example:
Shell> drvcfg -s

HP AVIO Stor Driver Configuration
==================================
Warning: Enumerating all SCSI or FC LUNs increases initialization times.
Enumerate all SCSI LUNs (Y/N)? [current setting: N]: N
Enumerate all FC LUNs (Y/N)? [current setting: N]: Y

Shell> reset

After the reboot you see the SAN devices in EFI (for example with the devices command).
If you want to set the status of a WWN back to FREE for reuse, type on the GUID server:
# guidmgmt -x -f FREE wwn 0x5001XXXXXXXXXXXX

Now you can add the Fibre Channel HBAs to the HPVM.
# hpvmmodify -P vmhost1 -a hba:avio_stor::npiv:/dev/fcd0
# hpvmmodify -P vmhost1 -a hba:avio_stor::npiv:/dev/fcd2

The WWNs come from the GUID server. You can display the WWNs with the following commands.
On the VM host:

# fcmsutil /dev/fcd0 npiv_info

or with:
# hpvmstatus -P vmhost1

Attention! If you make the HPVM highly available, you must enter the WWNs yourself on the second node.
If you do not enter the WWNs on the second node and the second node is configured as a GUID client,
the GUID server will assign different WWNs to the HPVM!
Check the HPVM configuration:
# hpvmstatus -P vmhost1
Virtual Machine Name VM #  Type OS Type State
==================== ===== ==== ======= ========
vmhost1                  1 SH   HPUX    On (OS)

[Runnable Status Details]
Runnable status      : Runnable

[Remote Console]
Remote Console not configured

[Authorized Administrators]
Oper Groups  :
Admin Groups :
Oper Users   :
Admin Users  :

[Virtual CPU Details]
#vCPUs Ent Min Ent Max
====== ======= =======
     4   10.0%  100.0%

[Memory Details]
Total    Reserved
Memory   Memory
=======  ========
   4 GB     64 MB

[Dynamic Memory Information]
NOTE: Dynamic data unavailable, configured values only
Minimum     Target      Memory      Maximum
Memory      Memory      Entitlement Memory
=========== =========== =========== ===========
     512 MB     4096 MB           -     4096 MB

[Storage Interface Details]
                                        vPar/VM   Physical
Device  Adapter    Bus Dev Ftn Tgt Lun  Storage   Device
======= ========== === === === === === ========= =========================
hba     avio_stor    0   1             npiv      /dev/fcd0-0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX
hba     avio_stor    1   1             npiv      /dev/fcd2-0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX

[Network Interface Details]
Interface Adaptor    Name/Num              PortNum Bus Dev Ftn Mac Address
========= ========== ===================== ======= === === === =================
vswitch   avio_lan   vswadm01              1       1   0   0   66-a2-a1-7d-45-15
vswitch   avio_lan   vswadm02              1       2   0   0   b2-53-c3-02-79-50
vswitch   avio_lan   vswitch03             1       3   0   0   f6-d8-bc-10-ba-22
vswitch   avio_lan   vswprd01              1       4   0   0   32-a6-48-a3-a7-68

[Direct I/O Interface Details]
                                              vPar/VM   Physical
Device  Adapter Bus Dev Ftn Mac Address       Storage   Device
======= ======= === === === ================= ========= ===========

[Misc Interface Details]
                                        vPar/VM   Physical
Device  Adapter    Bus Dev Ftn Tgt Lun  Storage   Device
======= ========== === === === === === ========= =========================
serial  com1                            tty       console

Start the HPVM:
# hpvmstart -P vmhost1

Go to the console from the physicalHost1 host:
# hpvmconsole -P vmhost1

Check the status of the OS.
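Inside hpvmconsole you reach the guest screen from the virtual MP menu with the CO command; roughly like this (a sketch, the exact prompt and menu text depend on the HPVM version):

[vmhost1] vMP> co

Ctrl-B usually returns you from the guest console to the virtual MP menu.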
10. Install HP-UX on the VM
After the VM has been reset, create a boot profile to install the golden image from the Ignite server.
Create the netboot profile from the EFI shell with dbprofile:
# dbprofile -dn ignite -sip <ignite server ip> -cip <client IP> -m 255.255.255.0 -b "/opt/ignite/boot/nbp.efi"

Check the dbprofile by typing:
# dbprofile

After this you can recover or install a new HP-UX from the Ignite server.
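To actually start the network boot with this profile, the usual way from the EFI shell is lanboot with the profile name (a sketch, assuming the dbprofile named ignite from above):

Shell> lanboot select -dn ignite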
11. Configure Serviceguard
The configuration is made on node physicalHost1. /etc/cmcluster/cmclconfig.ascii:
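The cluster ASCII file shown below can be written by hand or first generated as a template with cmquerycl (a sketch, using the node and quorum server names of this setup):

# cmquerycl -v -C /etc/cmcluster/cmclconfig.ascii -q quorumserver -n physicalHost1 -n physicalHost2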
CLUSTER_NAME             cl_hpvm
HOSTNAME_ADDRESS_FAMILY  IPV4
QS_HOST                  quorumserver
QS_POLLING_INTERVAL      120000000
QS_TIMEOUT_EXTENSION     2000000

NODE_NAME                physicalHost1
  NETWORK_INTERFACE      lan900
    HEARTBEAT_IP         XX.X.XX.XX
  NETWORK_INTERFACE      lan901
    HEARTBEAT_IP         XX.X.XX.XX
  NETWORK_INTERFACE      lan5000
    STATIONARY_IP        XX.X.XX.XX

NODE_NAME                physicalHost2
  NETWORK_INTERFACE      lan900
    HEARTBEAT_IP         XX.X.XX.XX
  NETWORK_INTERFACE      lan901
    HEARTBEAT_IP         XX.X.XX.XX
  NETWORK_INTERFACE      lan5000
    STATIONARY_IP        XX.X.XX.XX

MEMBER_TIMEOUT             14000000
AUTO_START_TIMEOUT         600000000
NETWORK_POLLING_INTERVAL   2000000
NETWORK_FAILURE_DETECTION  INOUT
NETWORK_AUTO_FAILBACK      YES

SUBNET            XX.X.XX.XX
  IP_MONITOR      ON
  POLLING_TARGET  XX.X.XX.XX
SUBNET            XX.X.XX.XX
  IP_MONITOR      OFF
SUBNET            XX.X.XX.XX
  IP_MONITOR      OFF

MAX_CONFIGURED_PACKAGES  300

USER_NAME  ANY_USER
USER_HOST  ANY_SERVICEGUARD_NODE
USER_ROLE  MONITOR

Now run cmcheckconf -C cmclconfig.ascii. If there are no errors, apply the configuration:
# cmapplyconf -C cmclconfig.ascii

Start the cluster:
# cmruncl

Check the status:
# cmviewcl

CLUSTER        STATUS
cl_hpvm        up

  NODE           STATUS       STATE
  physicalHost1  up           running
  physicalHost2  up           running
12. Move the HPVM into Serviceguard
On the second host, create the same virtual switches for the HPVM as on host 1. Then create the same HPVM, with the same name and the same vHBAs.
Example:
Create vmhost1 with the same values as on the first host: 4 CPUs and 4 GB of memory.
# hpvmcreate -P vmhost1 -c 4 -r 4G

Now create the same vHBAs with the same WWNs.
# hpvmmodify -P vmhost1 -a hba:avio_stor:0,1,0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX:npiv:/dev/fcd0
# hpvmmodify -P vmhost1 -a hba:avio_stor:1,1,0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX:npiv:/dev/fcd2

Now the HPVM UUID must be corrected. You can find it on the first host in /var/opt/hpvm/guests:
/var/opt/hpvm/guests physicalHost1 # ll
total 0
lrwxr-xr-x   1 root   hpvmsys   56 Sep 26 10:19 vmhost1 -> /var/opt/hpvm/uuids/ee1bf5d0-07b2-11e2-9a0e-2c4138317742

On the second host, change the UUID so that it has the same value as on the first host:
# hpvmmodify -p 2 -x guest_uuid="ee1bf5d0-07b2-11e2-9a0e-2c4138317742"
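To confirm the change, list the guests directory on the second host in the same way as on the first; the vmhost1 link must now point to the same UUID:

# ll /var/opt/hpvm/guests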
# hpvmmigrate -k -P vmhost1 -h physicalHost2

Now move the VM into Serviceguard. Run the command on the first host.
# cmdeployvpkg -P vmhost1 -n physicalHost1 -n physicalHost2

(-n specifies the physical cluster nodes)
Output:
/etc/cmcluster physicalHost1 # cmdeployvpkg -P vmhost1 -n physicalHost1 -n physicalHost2

This is the HP Serviceguard Integrity Virtual Servers Toolkit package creation script.
This script will assist the user to develop and manage Serviceguard packages for VM
and associated package configuration files.
We recommend you to review and modify the configuration file created by this script,
as needed for your particular environment.
Do you wish to continue? (y/n):y

[Virtual Machine Details]
Virtual Machine Name VM #  Type OS Type State
==================== ===== ==== ======= ========
vmhost1                  3 SH   HPUX    Off

[Storage Interface Details]
                                        vPar/VM   Physical
Device  Adapter    Bus Dev Ftn Tgt Lun  Storage   Device
======= ========== === === === === === ========= =========================
hba     avio_stor    0   0             npiv      /dev/fcd0-0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX
hba     avio_stor    0   1             npiv      /dev/fcd2-0x5001XXXXXXXXXXXX,0x5001XXXXXXXXXXXX

[Network Interface Details]
Interface Adaptor    Name/Num              PortNum Bus Dev Ftn Mac Address
========= ========== ===================== ======= === === === =================
vswitch   avio_lan   vswitch01             1       1   0   0   XX-XX-XX-XX-XX-XX
vswitch   avio_lan   vswitch02             1       2   0   0   XX-XX-XX-XX-XX-XX
vswitch   avio_lan   vswitch03             1       3   0   0   XX-XX-XX-XX-XX-XX
vswitch   avio_lan   vswitch04             1       4   0   0   XX-XX-XX-XX-XX-XX

Package the VM summarized above? (y/n):y
Checking the VM and cluster configuration
Determining package attributes and modules...
Creating modular style package files for VM : vmhost1
Review and/or modify the package configuration file (optional)? (y/n):n
Copy the package files to each cluster member? (y/n):y
The VM has been successfully configured as a Serviceguard package.
Use cmcheckconf check the package configuration file (optional)? (y/n):n
Apply the package configuration file to the cluster (optional)? (y/n):y
Modify the package configuration ([y]/n)? y
Completed the cluster update

Please see the HP Serviceguard Toolkit for Integrity Virtual Servers user guide for
additional instructions on configuring Virtual Machines or Virtual Partitions (vPar)
as Serviceguard packages.

Before running this package the following steps may need to be performed:
1. Review the files located in /etc/cmcluster/vmhost1/.
2. Add new LVM Volume Groups to the cluster configuration file, if any.
3. Check the cluster and/or package configuration using the cmcheckconf command.
4. Apply the cluster and/or package configuration using the cmapplyconf command.
5. Un-mount file systems and deactivate non-shared volumes used by the VM.
6. Start dependent packages associated with shared LVM, CVM or CFS backing stores.
7. Start the package (on the node where the vmhost1 is running) using:
   cmrunpkg vmhost1
# cmviewcl

CLUSTER        STATUS
cl_hpvm        up

  NODE           STATUS       STATE
  physicalHost1  up           running

    PACKAGE        STATUS       STATE        AUTO_RUN     NODE
    vmhost1        up           running      enabled      physicalHost1

  physicalHost2  up           running

The HPVM can keep running during this step; no HPVM downtime is needed.
# cmdeployvpkg -P vmhost1 -U

Inside the HPVM itself, remember that the disks are mirrored between the different data centers.
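Finally you can test the failover of the new package with the standard Serviceguard commands; a sketch (run this only in a maintenance window):

# cmhaltpkg vmhost1
# cmrunpkg -n physicalHost2 vmhost1
# cmmodpkg -e vmhost1
# cmviewcl -v -p vmhost1

cmhaltpkg stops the package, cmrunpkg starts it on the other node, cmmodpkg -e re-enables package switching, and cmviewcl -v -p shows the detailed package status.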