Installing a TSM Server v 7.1.1 in a PowerHA Cluster
This is a log of the installation of a TSM server version 7.1.1 within a PowerHA 7 cluster.
Contents
- Preparing File Systems for TSM
- Creating Users and Groups
- Extending file systems in the rootvg
- Activating I/O Completion Ports (IOCP)
- Installing the TSM Software
- Configuring the server instance
- Setting up the TSM Server
- Installation on the 2nd Node
- Integrating the TSM server into PowerHA
- Installing the TSM Fixpack
Steps 1 to 7 above have to be done on the primary node first.
Please Mind the Prompt
Please keep in mind that some steps have to be done as root user while others have to be done as the instance owner. In this article, commands to be issued as root are preceded by a hash (#), while commands to be issued by the instance owner are preceded by a dollar sign ($). The instance owner in our examples is the user tsminst1.
1. Preparing File Systems for TSM
Two volume groups with the below logical volumes and file systems have to be prepared for the TSM server:
tsmsrv_vg:
PP Size: 64MB
LV NAME    TYPE  LPs  PPs  PVs  LV STATE    MOUNT POINT
tsm_home   jfs2  32   64   2    open/syncd  /server/tsm/home
tsm_db     jfs2  352  704  2    open/syncd  /server/tsm/db
tsm_arch   jfs2  128  256  2    open/syncd  /server/tsm/archive
tsm_log    jfs2  384  768  2    open/syncd  /server/tsm/active
and
tsmdisk_vg:
PP Size: 64MB
LV NAME    TYPE  LPs   PPs   PVs  LV STATE    MOUNT POINT
tsm_dpool  jfs2  3000  6000  2    open/syncd  /server/tsm/dsk
2. Creating Users and Groups
Creating the tsmsrvrs group:
# mkgroup -A id=1001 tsmsrvrs
Creating the tsminst1 user:
# mkuser id=1002 \
    pgrp=tsmsrvrs \
    groups=staff,tsmsrvrs \
    home=/server/tsm/home/tsminst1 \
    shell=/bin/ksh \
    gecos='TSM SRV System Administrator' \
    umask=002 \
    tsminst1
Some limits have to be adapted:
# chuser fsize=-1 tsminst1
The below file systems must be owned by the TSM user:
# chown tsminst1.tsmsrvrs /server/tsm/home/tsminst1
# chown tsminst1.tsmsrvrs /server/tsm/active
# chown tsminst1.tsmsrvrs /server/tsm/archive
# chown tsminst1.tsmsrvrs /server/tsm/db
# chown tsminst1.tsmsrvrs /server/tsm/dsk
3. Extending file systems in the rootvg
# chfs -a size=8G /opt
# chfs -a size=3G /tmp
# chfs -a size=5G /usr
# chfs -a size=2G /var
4. Activating I/O Completion Ports (IOCP)
# mkdev -l iocp0
5. Installing the TSM Software
First we need to unpack the TSM server archive:
# cd /path/to/package
# ./TSM_711_AIX_AGT_ML.bin
Please note the memory requirement of 16 GB; if your system has less, the installation might fail.
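Before launching the installer it can be worth verifying the available memory. The sketch below only demonstrates the idea: on AIX, 'lsattr -El sys0 -a realmem' reports the physical memory in kilobytes, and the sample string imitates that output format. On the real host you would pipe the actual lsattr output through the same awk step instead of the sample.

```shell
# Hedged sketch of a 16 GB pre-check. The sample line mimics the output
# format of 'lsattr -El sys0 -a realmem' and is not from a real system.
sample="realmem 16777216 Amount of usable physical memory in Kbytes False"
realmem_kb=$(echo "$sample" | awk '{print $2}')
realmem_gb=$((realmem_kb / 1024 / 1024))
if [ "$realmem_gb" -ge 16 ]; then
    echo "memory OK: ${realmem_gb} GB"
else
    echo "WARNING: only ${realmem_gb} GB - installation might fail"
fi
```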
After unpacking you will find a script 'install.sh' in the current directory. We run it:
# ./install.sh -c
During the installation we have to answer a couple of questions. Most of them are self-explanatory; the most important ones are shown below.
Software Selection:
Select packages to install:
     1. [X] IBM® Installation Manager 1.7.2
     2. [X] IBM Tivoli Storage Manager server 7.1.1.20140829_1116
     3. [X] IBM Tivoli Storage Manager languages 7.1.1.20140829_1110
     4. [X] IBM Tivoli Storage Manager license 7.1.1.20140829_1109
     5. [ ] IBM Tivoli Storage Manager storage agent 7.1.1.20140829_1109
     6. [X] IBM Tivoli Storage Manager device driver 7.1.1.20140829_1113
     7. [ ] IBM Tivoli Storage Manager Operations Center 7.1.1000.20140822_1907
Software Product:
We select the "Extended Edition"
TSM requires the Atape device driver. We copy the driver onto the TSM server (let's say to a directory called '/migration') and install it.
# cd /migration/
# ls -l Atape*
-rwxr--r-- 1 root goslogin 4198400 Dec 10 17:36 Atape.12.8.4.0.bin
# inutoc .
# installp -d. -agXY Atape.driver
We reboot the server.
6. Configuring the server instance
Creating the TSM Instance
# /opt/tivoli/tsm/db2/instance/db2icrt -a SERVER -u tsminst1 tsminst1
Checking the configuration...
# su - tsminst1
$ db2 get dbm cfg | grep -i dftdbpath
 Default database path                       (DFTDBPATH) = /server/tsm/home/tsminst1
In case the DFTDBPATH is not correct, we can set it with the following command:
$ db2 update dbm cfg using dftdbpath /server/tsm/home/tsminst1
Creating dsmserv.opt
$ vi /server/tsm/home/tsminst1/dsmserv.opt

ACTIVELOGDirectory /server/tsm/active
ARCHLOGDirectory /server/tsm/archive
DEVCONFig devconfig
TCPPort 1500
VOLUMEHistory volhist
BACKUPINITIATIONROOT NO
The option "BACKUPINITIATIONROOT NO" is required for database backups via TDP (=> ANR1890W).
Starting DB2
$ db2start
Formatting the DB
$ dsmserv format dbdir=/server/tsm/db activelogdir=/server/tsm/active archlogdir=/server/tsm/archive activelogsize=8192
In case the formatting procedure fails for some reason, we need to remove everything we created before (note that db2idrop has to be run as root):

$ db2stop
$ dsmserv removedb TSMDB1
# /opt/tivoli/tsm/db2/instance/db2idrop tsminst1
$ rm -rf /server/tsm/archive/tsminst1
$ rm -rf /server/tsm/active/NODE0000
and we need to redo all steps from this chapter.
Creating userprofile and tsmdbmgr.opt
$ vi /server/tsm/home/tsminst1/sqllib/userprofile

export DSMI_CONFIG=/server/tsm/home/tsminst1/tsminst1/tsmdbmgr.opt
export DSMI_DIR=/usr/tivoli/tsm/client/api/bin64
export DSMI_LOG=/server/tsm/home/tsminst1/tsminst1
and
$ vi /server/tsm/home/tsminst1/tsminst1/tsmdbmgr.opt

SERVERNAME TSMDBMGR_TSMINST1
Editing dsm.opt and dsm.sys
The following commands have to be issued with root authorization.
# vi /usr/tivoli/tsm/client/api/bin64/dsm.opt

SErvername MYTSMSERVER
compressalways no
QUIET
and
# vi /usr/tivoli/tsm/client/api/bin64/dsm.sys

************************************************************************
SErvername MYTSMSERVER
  COMMMethod       TCPip
  TCPPort          1500
  TCPServeraddress tsmserver01
************************************************************************
servername TSMDBMGR_TSMINST1
  TCPServeraddress localhost
  commmethod       tcpip
  tcpport          1500
  * passwordaccess generate
  passworddir      /server/tsm/home/tsminst1
  errorlogname     /server/tsm/home/tsminst1/tsmdbmgr.log
  nodename         $$_TSMDBMGR_$$
Servername (here: "MYTSMSERVER") is a symbolic name - however the server address (here: "tsmserver01") must be a resolvable hostname or an IP address. It should be a service address in PowerHA connected to the TSM server resource group.
If you have trouble backing up the TSM DB, try commenting out or removing the option "passwordaccess generate".
Setting up some links
# rm -f /usr/tivoli/tsm/client/ba/bin64/dsm.sys
# ln -s /usr/tivoli/tsm/client/api/bin64/dsm.sys /usr/tivoli/tsm/client/ba/bin64/dsm.sys
# rm -f /usr/tivoli/tsm/client/ba/bin64/dsm.opt
# ln -s /usr/tivoli/tsm/client/api/bin64/dsm.opt /usr/tivoli/tsm/client/ba/bin64/dsm.opt
# mv /server/tsm/home/tsminst1/sqllib/db2nodes.cfg /etc/
# ln -sf /etc/db2nodes.cfg /server/tsm/home/tsminst1/sqllib/db2nodes.cfg
7. Setting up the TSM Server
Starting the TSM Server
First we have to start the TSM server. This has to be done as instance owner.
$ . /server/tsm/home/tsminst1/sqllib/db2profile
$ cd ~
$ db2start
$ /opt/tivoli/tsm/server/bin/dsmserv
With the last command we start the TSM Console in the current window. We will come back to the console later in this chapter.
Setting the TSM API password
# . /server/tsm/home/tsminst1/sqllib/db2profile
# /server/tsm/home/tsminst1/sqllib/adsm/dsmapipw

*************************************************************
* Tivoli Storage Manager                                    *
* API Version = 6.4.1                                       *
*************************************************************

Enter your current password:
Enter your new password:
Enter your new password again:

Your new password has been accepted and updated.
Registering License and Administrator
This has to be done on the TSM Console:
> reg lic file=/opt/tivoli/tsm/server/bin/tsmbasic.lic
> reg admin admin admin
> grant auth admin cl=sy
8. Installation on the 2nd Node
All PowerHA resources (most notably the TSM file systems) stay active on the primary node. Don't start a takeover here!
Creating Users and Groups
Of course we need the same users and groups on the 2nd node (-> 2. Creating Users and Groups )
Extending file systems in the rootvg
File system size requirements in the rootvg are the same as on the primary node (-> 3. Extending file systems in the rootvg )
Activating I/O Completion Ports (IOCP)
# mkdev -l iocp0
Installing the TSM Software
This has to be done the same way as on the primary node (-> 5. Installing the TSM Software )
/etc/services
All entries starting with 'DB2_tsminst1' have to be copied from the primary node:
DB2_tsminst1      60000/tcp
DB2_tsminst1_1    60001/tcp
DB2_tsminst1_2    60002/tcp
DB2_tsminst1_END  60003/tcp
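One way to transfer these entries is to grep them out of /etc/services on the primary node and append them on the 2nd node. The ssh pipeline in the comment below is a hypothetical one-liner (NODE2 is a placeholder host name); the executed part only demonstrates the grep pattern against a small sample file:

```shell
# Hypothetical one-liner, run on the primary node:
#   grep '^DB2_tsminst1' /etc/services | ssh NODE2 'cat >> /etc/services'
# Portable demonstration of the same pattern against a sample file:
printf '%s\n' 'ssh 22/tcp' \
              'DB2_tsminst1 60000/tcp' \
              'DB2_tsminst1_END 60003/tcp' > services.sample
count=$(grep -c '^DB2_tsminst1' services.sample)
echo "matched $count DB2_tsminst1 entries"
rm -f services.sample
```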
Copying dsm.opt and dsm.sys
# scp NODE1:/usr/tivoli/tsm/client/api/bin64/dsm.sys /usr/tivoli/tsm/client/api/bin64/
# scp NODE1:/usr/tivoli/tsm/client/api/bin64/dsm.opt /usr/tivoli/tsm/client/api/bin64/
# ln -s /usr/tivoli/tsm/client/ba/bin64 /usr/adsm
# rm -f /usr/tivoli/tsm/client/ba/bin64/dsm.sys
# ln -s /usr/tivoli/tsm/client/api/bin64/dsm.sys /usr/tivoli/tsm/client/ba/bin64/dsm.sys
# rm -f /usr/tivoli/tsm/client/ba/bin64/dsm.opt
# ln -s /usr/tivoli/tsm/client/api/bin64/dsm.opt /usr/tivoli/tsm/client/ba/bin64/dsm.opt
Adapting /etc/db2nodes.cfg
# vi /etc/db2nodes.cfg

0 tsmnode02-boot 0
We have to put in the persistent hostname of the 2nd node here. The file must be owned by the instance owner:
# chown tsminst1:tsmsrvrs /etc/db2nodes.cfg
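A quick way to double-check which hostname db2nodes.cfg points at is to pull out its second field. The snippet below runs against a sample file imitating the format above; on the real node you would run the awk step against /etc/db2nodes.cfg and compare the result with the node's persistent hostname:

```shell
# Sketch: extract the hostname field from a db2nodes.cfg-style line
# (sample file used here instead of the real /etc/db2nodes.cfg).
printf '0 tsmnode02-boot 0\n' > db2nodes.sample
cfg_host=$(awk '{print $2}' db2nodes.sample)
echo "db2nodes.cfg points at: $cfg_host"
rm -f db2nodes.sample
```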
9. Integrating the TSM server into PowerHA
Adapting startserver and stopserver
On both nodes we have to change three variables in the shell scripts startserver and stopserver:
# vi /opt/tivoli/tsm/server/bin/startserver

INST_USER=tsminst1
INST_DIR=/server/tsm/home/tsminst1
and
# vi /opt/tivoli/tsm/server/bin/stopserver

INSTANCE_DIR=/server/tsm/home/tsminst1
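Since fixpack updates replace both scripts and these edits then have to be repeated, a small sed sketch can make them non-interactive. The variable values in the sample file are invented for the demonstration; on the real system you would run the sed expressions against /opt/tivoli/tsm/server/bin/startserver (and analogously for stopserver):

```shell
# Sketch: set the instance variables non-interactively. The sample file
# content is made up; it only stands in for the shipped startserver script.
printf '%s\n' 'INST_USER=tsmuser' 'INST_DIR=/home/tsmuser' > startserver.sample
edited=$(sed -e 's|^INST_USER=.*|INST_USER=tsminst1|' \
             -e 's|^INST_DIR=.*|INST_DIR=/server/tsm/home/tsminst1|' \
             startserver.sample)
echo "$edited"
rm -f startserver.sample
```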
Now we can create links to the scripts under /usr/bin:
# ln -s /opt/tivoli/tsm/server/bin/startserver /usr/bin
# ln -s /opt/tivoli/tsm/server/bin/stopserver /usr/bin
Adding Application Controller Scripts to the PowerHA Configuration
First we define the scripts as resources:
# smitty cm_app_scripts
> Add Application Controller Scripts

                                                     [Entry Fields]
* Application Controller Name                        [AC_TSMSERVER]
  Start Script                                       [/usr/bin/startserver]
  Stop Script                                        [/usr/bin/stopserver]
  Application Monitor Name(s)
  Application startup mode                           [background]
Then we add the newly created resource as Application Controller to the RG_TSMSERVER resource group:
# smitty cm_change_show_rg_resources
> RG_TSMSERVER

                                                     [Entry Fields]
  Resource Group Name                                RG_TSMSERVER
  Participating Nodes (Default Node Priority)        tsmnode01 tsmnode02

  Startup Policy                                     Online On Home Node Only
  Fallover Policy                                    Fallover To Next Priority Node In The List
  Fallback Policy                                    Never Fallback

  Service IP Labels/Addresses                        [tsmserver01]
  Application Controllers                            [AC_TSMSERVER]

  Volume Groups                                      [tsmsrv_vg tsmdisk_vg]
  Use forced varyon of volume groups, if necessary   true
  Automatically Import Volume Groups                 false

  Filesystems (empty is ALL for VGs specified)       [ ]
  Filesystems Consistency Check                      fsck
  Filesystems Recovery Method                        sequential
  Filesystems mounted before IP configured           false
  Filesystems/Directories to Export (NFSv2/3)        [ ]
Now we can synchronize the cluster.
10. Installing the TSM Fixpack
Before going live with the server we should apply the latest fixpack to the TSM server. As of the time of writing the most recent fixpack is 7.1.1.300.
The installation can be done in five steps. Steps 1 to 4 have to be done on the primary cluster node only.
Step 1: Stopping the TSM Server
# stopserver
Step 2: Applying the Fix
Unpacking the Fixpack
# cd /path/to/fix
# ./7.1.1.300-TIV-TSMSRV-AIX.bin
Installing the Fix
In the current directory we find the script 'install.sh'. We run it with the '-c' option:
# ./install.sh -c
During the installation we have to answer a couple of questions. Most of them are self-explanatory; the most important ones are shown below.
Software Selection:
First we have to update the Installation Manager itself.
Select packages to install:
     1. [X] IBM Installation Manager 1.8.0
The procedure asks to restart the Installation Manager. After confirming, the update procedure starts.
Options:
   R. Restart Installation Manager

-----> [R]
We select Update:
Select:
     1. Install   - Install software packages
     2. Update    - Find and install updates and fixes to installed software packages
     3. Modify    - Change installed software packages
     4. Roll Back - Revert to an earlier version of installed software packages
     5. Uninstall - Remove installed software packages
     :

-----> 2
All required updates are already pre-selected:
=====> IBM Installation Manager> Update> Packages

Package group: IBM Tivoli Storage Manager

Update packages:
     1. [X] IBM Tivoli Storage Manager server 7.1.1.20140829_1116
     2. [X]     Version 7.1.1.20150529_1101
     3. [ ] IBM Tivoli Storage Manager languages 7.1.1.20140829_1110
     4. [ ] IBM Tivoli Storage Manager license 7.1.1.20140829_1109
     5. [X] IBM Tivoli Storage Manager device driver 7.1.1.20140829_1113
     6. [X]     Version 7.1.1.20150529_1058
Step 3: Adjusting the Start/Stop Scripts
The update procedure replaces the two scripts startserver and stopserver. We restore them by changing the variables again:
# vi /opt/tivoli/tsm/server/bin/startserver

INST_USER=tsminst1
INST_DIR=/server/tsm/home/tsminst1
and
# vi /opt/tivoli/tsm/server/bin/stopserver

INSTANCE_DIR=/server/tsm/home/tsminst1
Step 4: Starting the TSM Server
# startserver
Step 5: Installation on the 2nd Node
The resources are still active on the primary node. We keep it this way, don't switch the resources, and perform the two steps below:
- Installing the Fix
We do it the same way as we did on the primary node (-> Step 2: Applying the Fix).
- Adjusting the Start/Stop Scripts
The same three variables have to be changed again (-> Step 3: Adjusting the Start/Stop Scripts).