September 26, 2016

HACMP: Cluster Commandline

 

Contents

  1. PowerHA Version
  2. Program Paths
  3. Log Files

Dynamic Reconfiguration (CSPOC)

  1. »cl_« versus »cli_« Commands
  2. Extend a Volume Group
  3. Reduce a Volume Group
  4. Add a Filesystem to an Existing VG
  5. Increase a Filesystem
  6. Mirror a Logical Volume
  7. Remove a Logical Volume Mirror
  8. Synchronize a Logical Volume Mirror
  9. Remove a Logical Volume
  10. Move a Logical Volume between PVs

Resource Group Management

  1. List all Defined Resource Groups
  2. Move a Resource Group to another Node
  3. Bring a Resource Group Down
  4. Bring a Resource Group Up

Cluster Information

  1. Where Can I Find the Log Files?
  2. Where Can I Find the Application Start/Stop Scripts?
  3. Show the Configuration of a Particular Resource Group
  4. Cluster IP Configuration

Cluster State

  1. Cluster State
  2. The Cluster Manager
  3. Where are the Resources Currently Active?

New Commands with PowerHA 7

  1. List Repository Disks
  2. Show HACMP Networks
  3. Show Information about a Particular Network
  4. How to Change a Network Parameter

 

1. PowerHA Version

There is no dedicated command to show the PowerHA version.¹ However, the version of the cluster.es.server.rte fileset reflects the PowerHA version:

# lslpp -Lqc cluster.es.server.rte | cut -d: -f3
6.1.0.3

If you don't trust the above method to determine the PowerHA version, you can also ask the cluster manager:

# lssrc -ls clstrmgrES | egrep '^local node vrmf|^cluster fix level'
local node vrmf is 6103
cluster fix level is "3"


¹ With Version 6.1 Service Pack 10 a new command was introduced to show the PowerHA version: /usr/es/sbin/cluster/utilities/halevel -s.

 

2. Program Paths

The cluster commands are not in the default PATH. It makes sense to extend the default PATH to include the cluster directories:

# export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc
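The export above lasts only for the current shell. A minimal sketch to make it permanent by appending the line to root's .profile; the profile location is an assumption (some sites prefer /etc/environment), adjust to your environment:

```shell
# Append the cluster directories to PATH, but only if they are not
# already there. The .profile location is an assumption.
CLPATH=/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc
case ":$PATH:" in
    *":/usr/es/sbin/cluster:"*) : ;;           # already present, do nothing
    *) PATH=$PATH:$CLPATH; export PATH
       echo "export PATH=\$PATH:$CLPATH" >> "$HOME/.profile" ;;
esac
```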

 

3. Log Files

hacmp.out
is located under /tmp or /var/hacmp/log in most installations. To be sure, run cllistlogs:

# /usr/es/sbin/cluster/utilities/cllistlogs
/tmp/hacmp.out


cluster.log

/var/hacmp/adm/cluster.log


clcomd.log

/var/hacmp/clcomd/clcomd.log


More logs can be found under /var/hacmp/log

# ls -l /var/hacmp/log/*.log /var/hacmp/log/*.debug
-rw-r--r--    1 root     system       492803 Sep 15 15:32 /var/hacmp/log/autoverify.log
-rw-r--r--    1 root     system       124997 Sep 23 11:10 /var/hacmp/log/clinfo.log
-rw-r--r--    1 root     system       482541 Sep 23 11:10 /var/hacmp/log/clstrmgr.debug
-rw-------    1 root     system       288599 Sep 23 09:00 /var/hacmp/log/clutils.log
-rw-r--r--    1 root     system       353016 Sep 15 15:33 /var/hacmp/log/cspoc.log
-rw-r--r--    1 root     system        11500 Sep 15 15:33 /var/hacmp/log/hacmprd_run_rcovcmd.debug


CAA-Cluster

/var/adm/ras/syslog.caa
PowerHA 6.1 and earlier are not based on CAA, so this file is only relevant for PowerHA 7.1 and higher.

 

Dynamic Reconfiguration (CSPOC)

1. »cl_« versus »cli_« Commands

Most of the commands in this section are CSPOC commands found under /usr/es/sbin/cluster/sbin. They start with »cl_« followed by a well-known AIX LVM command. A word of warning: these commands are used by the CSPOC SMIT panels and are not intended to be run directly from the command line. Nevertheless, many administrators have grabbed them from SMIT via F6 and use them directly for daily business. They happen to work.

However, with HACMP 5.5 SP1 IBM introduced an "official" command-line interface to CSPOC. These commands live under /usr/es/sbin/cluster/cspoc. In contrast to the CSPOC commands used by SMIT, the official commands start with »cli_« (mind the i) followed by a well-known AIX LVM command. They were introduced to provide a framework for batch scripts. Since these commands are intended to be used outside SMIT, they may be the safer choice over the »cl_« commands.

The table below shows the most important LVM commands and their corresponding CSPOC commands:

AIX command    SMIT/CSPOC command            "official" CLI command
(/usr/sbin)    (/usr/es/sbin/cluster/sbin)   (/usr/es/sbin/cluster/cspoc)
chfs           cl_chfs                       cli_chfs
chlv           cl_chlv                       cli_chlv
chvg           cl_chvg                       cli_chvg
crfs           cl_crfs                       cli_crfs
extendlv       cl_extendlv                   cli_extendlv
extendvg       cl_extendvg                   cli_extendvg
mirrorvg       cl_mirrorvg                   cli_mirrorvg
mklv           cl_mklv                       cli_mklv
mklvcopy       cl_mklvcopy                   cli_mklvcopy
mkvg           cl_mkvg                       cli_mkvg
reducevg       cl_reducevg                   cli_reducevg
rmfs           cl_rmfs                       cli_rmfs
rmlv           cl_rmlv                       cli_rmlv
rmlvcopy       cl_rmlvcopy                   cli_rmlvcopy
syncvg         cl_syncvg                     cli_syncvg
unmirrorvg     cl_unmirrorvg                 cli_unmirrorvg

The syntax of the commands within one row of the above table is similar but not identical. For more information refer to IBM's ->PowerHA for AIX Cookbook, chapter 7.4.6.

 

2. Extend a Volume Group

One or more PVs can be added to an existing volume group. Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node with the "-R" switch:

nodeA# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...

 

3. Reduce a Volume Group

nodeA# /usr/es/sbin/cluster/sbin/cl_reducevg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...

Again a reference node has to be specified.

 

4. Add a Filesystem to an Existing VG

nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -c'2' -a'e' -e'x' -u'2' -s's' VolumeGroup LPs hdiskA hdiskB

It is recommended to use the narrowest upper bound possible to keep the mirror consistent after a filesystem extension.

You can also use a map file to tell the command how to set up your LV:

nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -m MapFile VolumeGroup LPs
nodeA# /usr/es/sbin/cluster/cspoc/cli_chlv -e x -u'2' -c'2'  LVName

The format of the map file is

 hdiskA:PP1
 hdiskB:PP1
    :
 hdiskC:PP2
 hdiskD:PP2

First put in all mappings for mirror copy 1, then add all mappings for mirror copy 2. The number of entries per mirror copy has to equal the number of LPs given on the command line. Be careful!
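A small sketch that generates a map file in the format described above, for a two-copy mirror with two LPs per copy. Disk names and PP numbers are made-up placeholders; on a real system, check which PPs are actually free first (lspv -M hdiskA shows the allocation):

```shell
# Build /tmp/LVName.map: first all PPs for mirror copy 1, then all PPs
# for copy 2 -- one line per LP and copy. Placeholder disks/PPs throughout.
MAPFILE=/tmp/LVName.map
: > "$MAPFILE"                                                # truncate/create
for pp in 120 121; do echo "hdiskA:$pp"; done >> "$MAPFILE"   # mirror copy 1
for pp in 120 121; do echo "hdiskB:$pp"; done >> "$MAPFILE"   # mirror copy 2
cat "$MAPFILE"
```

The file then contains four lines (2 LPs x 2 copies), matching the LP count passed to cl_mklv.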

Then we create the filesystem on top of the just-created LV...

nodeA# /usr/es/sbin/cluster/sbin/cl_crfs -cspoc -n'nodeA,nodeB' -v jfs2 -d'LVName' -m'/mountpoint' -p'rw' -a agblksize='4096' -a'logname=INLINE'

The CSPOC command automatically sets mount=false in /etc/filesystems, because the cluster, not AIX, mounts the filesystem. cl_crfs also mounts the new filesystem right away (in case the resource group is online).

 

5. Increase a Filesystem

If there are still enough free PPs in the volume group, the filesystem can be extended with a standard AIX command:

nodeA# chfs -a size=512G /mountpoint

Make sure superstrictness is set and the upper bound is set correctly.

In case you need to add new disks to the VG (->2. Extend a Volume Group) in order to extend the filesystem, you need to extend the underlying LV first:

nodeA# /usr/es/sbin/cluster/sbin/cl_extendlv -R'node' -u'8' -m'MapFile' LVName LPs
nodeA# chfs -a size=512G /mountpoint

The upper bound has to be adapted to the new number of PVs. The map file must contain only the additional mappings. You might need to increase the maximum number of LPs for the LV first:

nodeA# /usr/es/sbin/cluster/sbin/cl_chlv -x'2048' LVName

 

6. Mirror a Logical Volume

# /usr/es/sbin/cluster/sbin/cl_mklvcopy -R'NODE' -e'x' -u'1' -s's' LVName 2 hdiskA hdiskB hdisk...

Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node. You only need to set superstrictness and the upper bound if they are not already set.

It is also possible to use a mapfile¹ to control the exact mirror location:

# /usr/es/sbin/cluster/sbin/cl_mklvcopy -m'/root/LVNAME.map' LVName 2

With map files a reference node cannot be specified. So be sure you work on the right node!


¹ If you don't know how to create a mapfile check out the AIX FAQ.

 

7. Remove a Logical Volume Mirror

# /usr/es/sbin/cluster/sbin/cl_rmlvcopy -R'NODE' LVName 1 hdiskA hdiskB hdisk...

Again a reference node has to be specified.

 

8. Synchronize a Logical Volume Mirror

# /usr/es/sbin/cluster/cspoc/cli_syncvg -P 4 -v VGNAME

In the above command the optional switch »-P 4« is used to synchronize 4 LPs in parallel.

 

9. Remove a Logical Volume

# /usr/es/sbin/cluster/sbin/cl_rmlv 'LVName'

Before you can use this command the logical volume has to be in closed state, i.e. the filesystem has to be unmounted.

 

10. Move a Logical Volume between PVs

The table of ->Cluster LVM commands above is missing the command migratepv. How can an LV be moved from one PV to another with CSPOC commands?

  1. Create an additional mirror copy on the target disk¹
     /usr/es/sbin/cluster/sbin/cl_mklvcopy -R'NODE' -u'1' -s's' LV 3 TargetPV
  2. Synchronize the mirror²
     /usr/es/sbin/cluster/cspoc/cli_syncvg -P 4 -v VG
  3. Remove the mirror copy from the source disk
     /usr/es/sbin/cluster/sbin/cl_rmlvcopy -R'NODE' LV 2 SourcePV
  4. Repeat steps 1 to 3 for the second mirror copy.


¹ Here we assume that the LV resides on only one PV per mirror copy. For other layouts the value of '-u' has to be increased to the right number of PVs.
² The option '-P 4' to cli_syncvg synchronizes 4 LPs in parallel. You can increase or reduce the value depending on the system load. Valid values are between 1 and 32.
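The migration steps above can be sketched as a dry-run shell function that only prints the CSPOC commands it would execute; remove the echo to actually run them. Node, LV, VG and disk names are placeholders, not values from this article:

```shell
# Print the CSPOC commands to move one mirror copy from SRC to TGT.
# COPIES is the current number of copies (2 for a mirrored LV): the extra
# copy is created as copy COPIES+1, then the source copy is dropped.
migrate_copy() {    # usage: migrate_copy COPIES SRC TGT
    copies=$1 src=$2 tgt=$3
    echo "/usr/es/sbin/cluster/sbin/cl_mklvcopy -R'nodeA' -u'1' -s's' datalv $((copies + 1)) $tgt"
    echo "/usr/es/sbin/cluster/cspoc/cli_syncvg -P 4 -v datavg"
    echo "/usr/es/sbin/cluster/sbin/cl_rmlvcopy -R'nodeA' datalv $copies $src"
}
migrate_copy 2 hdisk2 hdisk4     # move the first mirror copy
migrate_copy 2 hdisk3 hdisk5     # then the second one
```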

 

Resource Group Management

1. List all Defined Resource Groups

# /usr/es/sbin/cluster/utilities/cllsgrp
RG1
RG2

 

2. Move a Resource Group to another Node

# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -m

It is also possible to move multiple resource groups in one go:

# /usr/es/sbin/cluster/utilities/clRGmove -g "RG1,RG2,RG3" -n NODE -m

 

3. Bring a Resource Group Down

# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -d

Multiple resource groups can be concatenated with commas: -g "RG1,RG2,RG3"

 

4. Bring a Resource Group Up

# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -u

Multiple resource groups can be concatenated with commas: -g "RG1,RG2,RG3"

 

Cluster Information

1. Where Can I Find the Log Files?

Historically the cluster's main log file could be found under "/tmp/hacmp.out", but nowadays the location is configurable. If you don't know where to look, run

# /usr/es/sbin/cluster/utilities/cllistlogs
/var/hacmp/log/hacmp.out
/var/hacmp/log/hacmp.out.1
/var/hacmp/log/hacmp.out.2

 

2. Where Can I Find the Application Start/Stop Scripts?

# /usr/es/sbin/cluster/utilities/cllsserv
AppSrv1     /etc/cluster/start_appsrv1     /etc/cluster/stop_appsrv1
AppSrv2     /etc/cluster/start_appsrv2     /etc/cluster/stop_appsrv2

 

3. Show the Configuration of a Particular Resource Group

# /usr/es/sbin/cluster/utilities/clshowres -g RG

If you are interested in the configuration of all resource groups, use clshowres without any option:

# /usr/es/sbin/cluster/utilities/clshowres

 

4. Cluster IP Configuration

# /usr/es/sbin/cluster/utilities/cllsif

 

Cluster State

1. Cluster State

The most widely known tool to check the cluster state is probably clstat:

# /usr/es/sbin/cluster/clstat -a

The switch "-a" forces clstat to run in terminal mode rather than to open an X window.

The same information can be obtained with

# /usr/es/sbin/cluster/utilities/cldump

Both clstat and cldump rely on SNMP.

 

2. The Cluster Manager

If, for whatever reason, SNMP does not allow you to use clstat or cldump, you can still ask the cluster manager about the state of your cluster:

# lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36    1.135.1.101 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 53haes_r610 11/16/10 06:18:14"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0     ml_idx[2]=1    
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 11
local node vrmf is 6103
cluster fix level is "3"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 1  NodeName - barney
    PgSpFree = 16743822  PvPctBusy = 3  PctTotalTimeIdle = 90.121875
DNP Values for NodeId - 2  NodeName - betty
    PgSpFree = 16746872  PvPctBusy = 0  PctTotalTimeIdle = 97.221894

The most important pieces of information are the current state (ST_STABLE), the local node vrmf, and the cluster fix level.

 

3. Where are the Resources Currently Active?

# /usr/es/sbin/cluster/utilities/clRGinfo
-----------------------------------------------------------------------------
Group Name     Group State                  Node           
-----------------------------------------------------------------------------
RES_GRP_01     ONLINE                       barney       
               OFFLINE                      betty       

RES_GRP_02     ONLINE                       betty       
               OFFLINE                      barney        

 

New Commands with PowerHA 7

Starting with PowerHA 7.1 IBM finally made the switch to CAA¹ clusters. This brought a new command: »clmgr«. With it we can not only display information ("query") but also change parameters ("modify") of a CAA cluster.

1. List Repository Disks

# /usr/es/sbin/cluster/utilities/clmgr query cluster | grep REPOSITORIES
REPOSITORIES="hdisk11 (00f8cbdcbaab7db6), hdisk10 (00f8cbdcbaab7e99)"

 

2. Show HACMP Networks

# /usr/es/sbin/cluster/utilities/clmgr query networks
net_ether_01
net_ether_02

 

3. Show Information about a Particular Network

(here: network net_ether_01 from the output above)

# /usr/es/sbin/cluster/utilities/clmgr query network net_ether_01

 

4. How to Change a Network Parameter

In the example below the parameter RESOURCE_DIST_PREF is set to NOALI (->How to Change the Order of IP Aliases):

# /usr/es/sbin/cluster/utilities/clmgr modify network net_ether_01 RESOURCE_DIST_PREF=NOALI


¹ CAA = Cluster Aware AIX