January 14, 2013

How to Add a New VG to an Active HACMP Resource Group

 

Contents

  1. Introduction
  2. Identify New Disks
  3. Create a Concurrent-Capable VG
  4. Create the Filesystem(s)
  5. Import on the Other Node
  6. Add the VG to the HACMP Resource Group
  7. Final Synchronization of the Cluster

 

1. Introduction

This article describes how to add a new volume group to an existing Resource Group on an active HACMP cluster.

 

2. Identify New Disks

First we need to scan for new disks:

nodeA# cfgmgr

and assign a PVID to each new disk:

nodeA# lspv | awk '$3 ~ /^None$/ {print "chdev -l "$1" -a pv=yes"}' | sh

Finally we change the reservation policy for use with HACMP:

nodeA# lspv | awk '$3 ~ /^None$/ {print "chdev -l "$1" -a reserve_policy=no_reserve"}' | sh

The two one-liners above change every disk that does not belong to a volume group. If your cluster uses disks for heartbeat or raw volumes that are not part of any volume group, you cannot use them as-is; set the attributes per disk instead, as sketched below.
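
A minimal sketch for the per-disk case, assuming the new disks are hdisk56 through hdisk63 (adjust the list to your environment):

nodeA# for DISK in hdisk56 hdisk57 hdisk58 hdisk59 hdisk60 hdisk61 hdisk62 hdisk63
> do
>   chdev -l $DISK -a pv=yes                     # assign a PVID
>   chdev -l $DISK -a reserve_policy=no_reserve  # drop the SCSI reservation - needed for concurrent access
> done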

Since the reserve policy is stored in the ODM and not on the disk itself, we have to set it on the other node as well:

nodeB# cfgmgr
nodeB# lspv | awk '$3 ~ /^None$/ {print "chdev -l "$1" -a reserve_policy=no_reserve"}' | sh

For correct mirroring we also need to identify the origin of the disks we found (see the sketch after the list). In our example we find these disks:

  • from DatacenterA: hdisk56, hdisk57, hdisk58, hdisk59
  • from DatacenterB: hdisk60, hdisk61, hdisk62, hdisk63
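
How to tell which storage system a disk comes from depends on your storage and multipathing drivers. One possible approach is to compare the serial number or LUN ID in the disk's vital product data against your storage systems; hdisk56 is just an example here:

nodeA# lscfg -vl hdisk56 | grep -i serial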

     

3. Create a Concurrent-Capable VG

Before we actually create the new VG we need to find a major number that is available on both nodes¹.
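
lvlstmajor lists the free major numbers of the node it runs on. If ssh between the cluster nodes is set up (an assumption for this sketch; HACMP itself does not require it), both nodes can be checked from a single session:

nodeA# lvlstmajor
nodeA# ssh nodeB lvlstmajor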

nodeB# lvlstmajor
44..97,108,109,111..118,128..138,148..158,168..178,188,189,192..198,203...

From the output of both nodes we pick a number that is free on all nodes; in our example we pick 203. We use this number when we create the new VG called datavg05 (-C makes the VG concurrent-capable, -V sets the major number, -S creates a scalable VG, and -s 32 sets the physical partition size to 32 MB):

nodeA# mkvg -C -V 203 -S -s 32 -y datavg05 hdisk56 hdisk57 hdisk58 hdisk59 hdisk60 hdisk61 hdisk62 hdisk63
nodeA# varyonvg datavg05

Please note: The volume groups we create are enhanced-concurrent-capable. By default these volume groups have the following properties: quorum is on, auto-varyon is off. However, quorum should be set to "off" in a two-datacenter setup:

nodeA# chvg -Qn datavg05
0516-1804 chvg: The quorum change takes effect immediately.
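
To verify the change, lsvg reports the quorum state of the volume group:

nodeA# lsvg datavg05 | grep -i quorum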
    


¹ The same major number for the volume groups' device files is only needed if your cluster acts as an NFS server. However, it's good practice to keep the volume groups' major numbers in sync across all nodes.

     

4. Create the Filesystem(s)

With the volume group created and varied on we can go on creating logical volumes. Since we want to mirror our filesystems between the two datacenters we need to ensure that all first LV copies (PV1 in »lslv -m« output) reside in DatacenterA and all second LV copies (PV2 in »lslv -m« output) reside in DatacenterB. Therefore we first create a non-mirrored minimal LV lvdata05 over the disks from DatacenterA:

# -e x spreads the LV across all listed disks; -s s (superstrict) with -u 4 (upperbound)
# confines each mirror copy to its own set of 4 disks
nodeA# mklv -e x -u 4 -s s -t jfs2 -y lvdata05 datavg05 4 hdisk56 hdisk57 hdisk58 hdisk59

We check:

nodeA# lslv -m lvdata05
lvdata05:N/A
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0206 hdisk56
0002  0206 hdisk57
0003  0206 hdisk58
0004  0206 hdisk59

Now we can add the mirror to the LV and create a filesystem on top of it:

nodeA# mklvcopy -e x lvdata05 2 hdisk60 hdisk61 hdisk62 hdisk63
nodeA# varyonvg datavg05          # this synchronizes the mirror
# -A no: no automatic mount at boot (HACMP mounts it); logname=INLINE: JFS2 log inside the LV
nodeA# crfs -v jfs2 -m /path/to/data05 -d /dev/lvdata05 -A no -p rw -a logname=INLINE
nodeA# mount /path/to/data05
nodeA# chown user:group /path/to/data05

We check our mirror:

nodeA# lslv -m lvdata05
lvdata05:N/A
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0206 hdisk56           0206 hdisk60
0002  0206 hdisk57           0206 hdisk61
0003  0206 hdisk58           0206 hdisk62
0004  0206 hdisk59           0206 hdisk63

Since we set superstrictness and an upperbound of 4, we can now increase the filesystem to the final size²:

nodeA# chlv -x 16384 lvdata05
nodeA# chfs -a size=512G /path/to/data05
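
A quick check that the new size arrived (df -g reports in gigabyte blocks):

nodeA# df -g /path/to/data05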
    

Once all filesystems are created we unmount everything we just created and close the volume group:

nodeA# umount /path/to/data05
nodeA# varyoffvg datavg05


² The default value for the maximum number of LPs per LV is 512 when you create a small LV. Thus the maximum number of LPs for the LV has to be raised before we can extend the filesystem.

     

5. Import on the Other Node

We need the PVID of one of the VG's disks (from the first node):

nodeA# lspv | grep -w datavg05 | head -1
hdisk56         00f6b4e4935bcb40                    datavg05

On the other node we need to run the configuration manager to scan for disks and to set the reserve_policy to no_reserve:

nodeB# cfgmgr
nodeB# lspv | awk '$3 ~ /^None$/ {print "chdev -l "$1" -a reserve_policy=no_reserve"}' | sh

Now we are ready to import the new volume group on the other node:

nodeB# importvg -n -V 203 -y datavg05 00f6b4e4935bcb40   # -n: do not vary on after import; -V 203: same major number
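
To verify the import, the disks should now show up under the VG name on nodeB as well:

nodeB# lspv | grep -w datavg05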
    

     

6. Add the VG to the HACMP Resource Group

Before we start to add the new VG to our resource group it might be a good idea to check the cluster configuration first:

nodeA# smitty hacmp
-> Problem Determination Tools
   -> HACMP Verification
      -> Verify HACMP Configuration

Correct any errors *before* changing resource groups!

Now it's time to let HACMP discover information:

nodeA# smitty hacmp
-> Extended Configuration
   -> Discover HACMP-related Information from Configured Nodes

Finally we add the new VG to the HACMP resource group (RG01 in our example):

nodeA# smitty hacmp
-> Extended Configuration
   -> Extended Resource Configuration
      -> HACMP Extended Resource Group Configuration
         -> Change/Show Resources and Attributes for a Resource Group

Select RG01 and add datavg05 to the list of volume groups:

  ________________________________________________________________________________________________________________

  Service IP Labels/Addresses                        [haservice1]                                                +
  Application Servers                                [APP_SRV1]                                                  +

  Volume Groups                                      [datavg01 datavg02 datavg03 datavg04 datavg05 ]             +
  Use forced varyon of volume groups, if necessary    true                                                       +
  Automatically Import Volume Groups                  false                                                      +

  Filesystems (empty is ALL for VGs specified)       [ ]                                                         +
  Filesystems Consistency Check                       logredo                                                    +
  Filesystems Recovery Method                         sequential                                                 +
  ________________________________________________________________________________________________________________
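
The result can also be double-checked on the command line. The clshowres utility (under /usr/es/sbin/cluster/utilities on typical installations; the exact path may vary with the HACMP version) lists the resources of the configured resource groups:

nodeA# /usr/es/sbin/cluster/utilities/clshowres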
    

     

7. Final Synchronization of the Cluster

Now synchronize the cluster:

nodeA# smitty hacmp
-> Extended Configuration
   -> Extended Verification and Synchronization

This will vary on the VG and mount the filesystem.
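
Once the synchronization has finished it is worth verifying the result. clRGinfo (again under /usr/es/sbin/cluster/utilities on typical installations) shows the state of the resource group, and lsvg and mount confirm the VG and the filesystem:

nodeA# /usr/es/sbin/cluster/utilities/clRGinfo RG01
nodeA# lsvg -o | grep datavg05          # VG varied on?
nodeA# mount | grep data05              # filesystem mounted?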

     
