Tech Blog

These are blog entries written by the UNIX Health Check development team. Our team has extensive technical experience on both AIX and Red Hat systems, and we like to share our knowledge with our visitors.

Topics: Backup & restore, Spectrum Protect

IBM Spectrum Protect/TSM Links

Official IBM Spectrum Protect / Tivoli Storage Manager sites:

Other TSM related sites:

Topics: Backup & restore, Spectrum Protect

What did the TSM admins do?

Want to know what all TSM administrators did in the last 24 hours in TSM?

# dsmadmc -comma -id=readonly -password=readonly query actlog s=ANR2017I begind=-1 begint=09:00 endd=today endt=09:00 | grep -v -i readonly | grep -v -i ibm-oc-server1
This assumes you have an administrator account configured, known as readonly with password readonly, which has no privileges.

This command will show you all administrator actions from 9 AM yesterday until 9 AM today.

In case you wish to create a readonly user within TSM, run the following command:
register admin readonly readonly contact="Readonly account"
There is no need to grant any authority to the readonly account. Simply by being registered, the account can perform read actions, such as querying the activity log, and cannot make any changes.

Topics: Backup & restore, Spectrum Protect

Tape library commands

How many times can the tape drives be cleaned?

# mtlib -l /dev/lmcp0 -qL
Look for "avail xxxx cleaner cycles" at the bottom.

Which cleaning tapes are in the library?
# mtlib -l /dev/lmcp0 -qC -s FFFD
The first column in the output is the volume serial number of the cleaning tapes.

When was the cleaning tape last used?
# mtlib -l /dev/lmcp0 -qE -V [tape-volume-serial-number] -u
Look for "last used" at the bottom of the output.

How are my tape drives doing (from a TSM viewpoint)?
# dsmadmc -comma -id=readonly -password=readonly q dr f=d
Look for "On-Line" and "Drive State" in the output. Also check if the paths to your tape drives are on-line.
# query path

Topics: Networking, Red Hat / Linux

Enabling bonding in Linux

To enable "etherchannel" or "bonding" in Linux nomenclature:

  • Add these two lines to /etc/modprobe.conf:
    alias bond0 bonding
    options bond0 miimon=100 mode=1 primary=eth0
    Entry "mode=1" means active/standby. Entry "miimon" is the link-monitoring interval in milliseconds used to decide whether a link is down. (Change eth0 to match your primary device if it is different; blades sometimes have eth4 as the primary device.)
  • In /etc/sysconfig/network-scripts create ifcfg-bond0 with the following (of course, change the network info to match your own):
    DEVICE=bond0
    BROADCAST=10.250.19.255
    IPADDR=10.250.19.194
    NETMASK=255.255.255.0
    GATEWAY=10.250.19.1
    ONBOOT=yes
    BOOTPROTO=none
  • Change ifcfg-eth0 and ifcfg-eth1 (or whatever they are) to resemble this:
    DEVICE=eth0
    HWADDR=00:22:64:9B:54:9C
    USERCTL=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    Leave the value of HWADDR as it already is in your file. This is important: it is this device's MAC address.
  • Run /etc/init.d/network restart. You will want to do at least this part from the console, in case something goes wrong.
  • Once you get your "OK" and the prompt comes back, do an ifconfig -a. You should see bond0.
  • Make sure you can ping your default gateway. After that, all should be good.
Note: When making backup copies of the ifcfg-* files, you must either move the backup files out of this directory or change your backup naming strategy for these files. The network script that reads these files basically runs: ls ifcfg-*. It then creates an interface based on the part after the dash ("-"). So if you run, for example:
# cp ifcfg-eth0 ifcfg-eth0.bak
You will end up with an alias device of eth0 called eth0.bak. Instead do this:
# cp ifcfg-eth0 bak.$(date +%Y%m%d).ifcfg-eth0
That foils the configuration script and allows you to keep backup/backout copies in the same directory as the working copies.
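Once bond0 is up, /proc/net/bonding/bond0 reports the bond's state, including which slave is currently active. A small helper to pull the active slave out of that status output (written here to read from an arbitrary file so it can be tested; on a live system pass /proc/net/bonding/bond0):

```shell
# Print the currently active slave from a bonding status file.
# Usage: bond_active_slave /proc/net/bonding/bond0
bond_active_slave() {
    awk -F': ' '/Currently Active Slave/ {print $2}' "$1"
}
```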

Topics: IBM Content Manager

ICM Compliance Monitoring Capability

You might be familiar with the change in licensing conditions with CM V8, whereby a concurrent user is defined as someone who has initiated a Library Server request within the past 30 minutes. Customers are required to have a license for the maximum number of such users, but it is very difficult for them to monitor compliance with this condition. CM V8.2 goes some way towards providing this compliance monitoring capability with the new Library Server monitor task.

Every 30 minutes this task will write a record to the Library Server SysAdmin Event Log table (ICMSTSYSADMEVENTS) recording the number of concurrent users at that point in time. An appropriate SQL query can work out the maximum figure recorded for any day, week, month or other period. Even though this is only a snapshot every 30 minutes, it is considerably better than what we have today.

To activate this function, you need to have tracing switched on. In the System Administration Client, go to ICMNLSDB, Library Server Parameters, Configurations, Library Server Configuration, Properties. In the Definition tab, set Max users to a very high number (it is usually 0, to allow any number of users logged on). On the Log and trace tab, set the Trace Level to Basic trace.

You should then see an entry in the ICMADMIN.ICMSTSYSADMEVENTS table every 30 minutes for event code 209, which looks as follows:

db2> select eventcode,created,userid,eventdata1 from icmstsysadmevents where eventcode = 209

EVENTCODE CREATED                    USERID   EVENTDATA1
      209 2005-01-05-09.13.36.143234 ICMADMIN 2
This record does not list the users - it merely provides a count of those users who have performed some interaction with the Library Server within the previous 30 minutes. If you don't want to wait 30 minutes, set the environment variable ICMCOUNTINTERVAL to some number of seconds, then start ICMPLSAP from the command line.
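To work out the maximum recorded per day, a query along these lines should do (a sketch; it assumes EVENTDATA1 holds the user count as a string, as in the sample output above):

```sql
select date(created) as day, max(int(eventdata1)) as max_concurrent_users
from icmadmin.icmstsysadmevents
where eventcode = 209
group by date(created)
order by day
```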

Be careful with this trace: turning it on on a CM 8.2 system can impact federated searches across CM 8.2 and CMOD 7.1.1 in such a way that CMOD eventually hangs. This was seen on a CM 8.2 system at fix level 3: federated searches hung and caused the ARS processes on the AIX server to hang as well, eventually using up all available ARS database processes and thus disabling federated searches. After turning off tracing and rebooting the AIX system, the problem was gone.

Topics: Backup & restore, IBM Content Manager, Spectrum Protect

Backup tips for IBM Content Manager

Creating consistent backups is probably one of the most difficult things to do with IBM Content Manager (ICM). The databases function as a reference to all content stored on disk or on TSM archive. Here are some options on how to create consistent backups.

1. Offline

This is a very easy way: take the complete system down and do a backup. This is a fair solution if you have a rather small number of documents stored in ICM.

2. Offline, but parallel

This is the same as the first option, but this time you back up the databases and content data in parallel. This saves some time.

3. Offline, parallel and improved

This is the same as option 2, but with a faster Resource Manager (RM) content backup. Backing up a lot of RM data can still take a very long time. E.g., backing up 10 million files can take 8 hours to complete, which might be too long. To improve the RM content backup, create two separate processes for the RM content backup: a find process that finds all directories in the RM content file system and pipes its output to the second process, a backup process that starts a backup session per directory found. Normal TSM backups first scan the file system (find) and then back up all data that has changed. By using this pipe, the two functions run in parallel. You should limit the maximum number of sessions to the TSM server to 20; otherwise you might overload the TSM server with sessions. With normal TSM performance, backing up 10 million files suddenly drops from 8 hours to 1 hour!
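The find-pipe-backup idea can be sketched as follows. This is a sketch, not the original script: the file system path, session cap and backup command are placeholders (BACKUP_CMD defaults to dsmc incremental and can be overridden), and GNU find/xargs are assumed:

```shell
# Back up every directory of the RM content file system in parallel,
# one backup session per directory, at most $MAXSESS sessions at once.
parallel_rm_backup() {
    rmfs="$1"
    # find feeds directory names straight into parallel backup sessions,
    # so scanning and backing up overlap instead of running one after another
    find "$rmfs" -mindepth 1 -type d -print |
    xargs -P "${MAXSESS:-20}" -I {} ${BACKUP_CMD:-dsmc incremental} {}
}

# Usage: MAXSESS=20 parallel_rm_backup /rmcontent
```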

The only drawback of this option is that TSM doesn't recognize this directory-type backup as a complete file system backup, and will therefore report the file system as not having been backed up completely. So you'll need to build checks into the backup script to verify that everything went OK (that all directories were successfully backed up).

4. Offline, but with read-only RM content backup

This option is only feasible if you can back up your RM content read-only. It backs up your databases offline, but after the database backup completes, the system is started again and the RM content file system is set to read-only. This way, users are still able to retrieve documents; when trying to store a document, they will receive an error until the RM content backup also completes. The only downtime needed in this situation is the time to back up the databases. This option is interesting if no users release new documents to ICM during the evening and night. Important with this option: keep the Migrator process down during the read-only backup, since it might try to delete documents from the RM content. Start the Migrator again when the read-only RM content backup completes.

Also very important with this option: the database may well change during the RM read-only backup. If you have to restore the system, you'll restore the last database backup and the last RM file system backup. Any changes to the database (usually only user session information and attribute data of the documents) will be lost with this restore. If that is not a problem for you, you can use this option.

5. Offline, using the JFS2 snapshot

This only works on AIX with JFS2 file systems. Starting with AIX 5.2 you can use the JFS2 snapshot. This is an AIX feature with which you create a snapshot of a JFS2 file system (the original is called the snappedFS). Creating a snapshot only takes a few minutes: it creates a new file system and copies the file system meta-data from the snappedFS to the snapshot file system. If, during the existence of the snapshot, any changes are made to the snappedFS, the previous blocks are copied to the snapshot first. This way the snapshot preserves the file system as it was at the moment it was made. You can back up this snapshot, knowing that this file system never changes. After the backup, you can delete the snapshot.

If you have to restore, just restore the last offline database backup and the RM file system backup.

Important: running defragfs on file systems with snapshots creates a lot of changed data blocks and therefore might use up a lot of the space available for the snapshot. If the snapshot runs full, it will be deleted. Usually, a snapshot of only 3-6% of the original file system's size is sufficient.
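On AIX 5.2 and later, the snapshot cycle for option 5 looks roughly like this (a sketch: /rmcontent, /snapmnt, the size and the snapshot logical volume name are placeholders; check the exact snapshot command syntax against your AIX level):

```
# Create a snapshot of the RM content file system
# (size typically 3-6% of the original file system)
snapshot -o snapfrom=/rmcontent -o size=512M

# List the snapshot (shows the snapshot logical volume, e.g. /dev/fssnap00)
snapshot -q /rmcontent

# Mount the snapshot read-only and back it up
mount -v jfs2 -o snapshot /dev/fssnap00 /snapmnt
dsmc incremental /snapmnt

# Afterwards, unmount and delete the snapshot
umount /snapmnt
snapshot -d /dev/fssnap00
```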

6. Online, using the JFS2 snapshot

The ultimate solution: back up the databases online, then pause ICM and create the JFS2 snapshot. The only downtime users experience is from the moment ICM is paused until it is resumed again. If you have to restore the system, restore it to the point in time the JFS2 snapshots were taken. This way, you know for sure that the database is consistent with the RM data.


Online DB2 backup and JFS2 snapshot

Other considerations for backups

There are also some other things that are important when backing up a content management system:

ICM gives you the possibility of configuring the system in multiple ways: you may install the Library Server (LS) and the Resource Manager (RM) on the same system, but you may also install the RM on a completely different system, even on another operating system:


RM and RMDB on separate systems

Having the LS and the RM on the same system makes it much easier to create a consistent backup. When the RM is on a separate system, you'll have to figure out how to create a consistent backup, which is quite difficult.

Because you're backing up different parts (databases, file systems), as a backup administrator you'll probably have to deal with the different people involved in managing the system. Keep them informed about what the backup is doing, and give them a way to check whether the backup is active. E.g., web admins might start the applications again without any knowledge of the backup status, thus ruining the complete backup. Communication is important here.

ICM has a utility to check the consistency of the database with the RM content: icmrmvolval. Run this utility once in a while, especially after a restore!

Document your backup solution, in case someone else needs to restore the system while you're away on holiday.

Do not store all the RM content in a single file system. If you go beyond 8 million files in a single file system, the TSM index for that file system will grow beyond 2 gigabytes, which is a process limit on AIX. If a process grows beyond 2 GB, it will core dump. Alternatively, you can do a memory-efficient TSM backup. This option requires much less memory, because it retrieves index information from TSM per directory, somewhat like option 3. The drawback of this option is that a memory-efficient TSM backup takes much longer to complete.
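The memory-efficient backup mentioned above is a standard TSM backup-archive client option; it can be enabled in the client options file or per invocation (the file system path below is a placeholder):

```
* In the client options file (dsm.opt, or dsm.sys on UNIX):
MEMORYEFFICIENTBACKUP YES

* Or per run:
dsmc incremental /rmcontent -memoryefficientbackup=yes
```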

Topics: IBM Content Manager

Image quality in the eClient

To improve the image quality in IBM Content Manager's eClient, update enhance_mode in the file IDM.properties from false to true. It doesn't matter whether you're using the Java applet or not (viewerAppletEnabled).

Once you modify IDM.properties, you do not need to restart the eClient_server, since the property daemon will check the file and update the configuration online.
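A quick way to flip the setting in place (a sketch: it assumes GNU sed and that IDM.properties contains a line reading exactly enhance_mode=false; make a copy of the file first):

```shell
# Flip enhance_mode from false to true in IDM.properties
enable_enhance_mode() {
    f="$1"
    sed -i 's/^enhance_mode=false$/enhance_mode=true/' "$f"
}

# Usage: enable_enhance_mode /path/to/IDM.properties
```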

If, after updating enhance_mode to true, images aren't viewable in the eClient, you might have a problem with the XVFB (if your CM system is on UNIX).

XVFB is the X Virtual Frame Buffer of the X server. It is used to convert images to JPEG. For low-quality images (conversion to GIF), it isn't required. On an AIX system, install the filesets OpenGL.OpenGL_X.dev, OpenGL.OpenGL_X.rte.base, OpenGL.OpenGL_X.rte.soft, X11.vfb and X11.samples.

To start the X server, put the following entry in the file /etc/inittab:

xvfb:2:respawn:/usr/bin/X11/X -force -vfb -x abx -x dbe -x GLX :255 > /dev/null
Kill the X server if it is running. Then, re-read the inittab by issuing:
# init q
The X server runs at display offset 255 (real port 6255), using the xvfb software. Check whether the X server is listening on port 6255:
# netstat -an | grep 6255 | grep LISTEN
Check if the X server is using the xvfb frame buffer:
# /usr/lpp/X11/Xamples/bin/xprop -display localhost:255 -root
Test the X server redirection using xclock:
# xclock -display localhost:255 &
Retrieve the handle for the output of the X server:
# xwininfo -root -tree -display localhost:255
Start an X session on your PC (via Exceed, WinAxe, or whatever you like). Then grab the xclock screen from the frame buffer and send this static picture to the X session on the PC:
# xwd -id 0x400009 -display localhost:255 | xwud -display [your-ip-address:0.0]
A picture of the xclock should be shown on the Windows desktop of your PC.

Now, test the image quality in the eClient again. Still no images in your eClient? Try exporting the DISPLAY before starting the eClient_Server:
# export DISPLAY=localhost:255

Topics: IBM Content Manager

Cataloging a Content Manager database on a Windows client

If you wish to catalog a Content Manager database on a Windows client, for example if you have 2 Content Manager environments, each with an ICMNLSDB database, run the following command from a DB2 command window:

"C:\Program Files\IBM\Cmgmt\cmbdbcat81" AIX CAT ICMNLSDB [database alias name] DB2INST1 [fully qualified domain name] 50000
In the example above, the database ICMNLSDB on an AIX environment is cataloged as [database alias name].

To remove a cataloged database, use the same command with the UNCAT parameter. Check the contents of cmbdbcat81 to view the command parameters.

To add an extra Content Manager to the System Administrative client, you need (besides cataloging the database) to add an extra stanza to cmbicmsrvs.ini.

Topics: IBM Content Manager

No java processes after starting the Resource Manager

When you start the Resource Manager, you should see 3 Java processes when you run db2 list applications. If you don't see these processes, you probably forgot to set the DB2 environment before starting the Resource Manager. Run this BEFORE starting the Resource Manager:

. /home/db2inst1/sqllib/db2profile

Topics: DB2, IBM Content Manager

DB2 catalog for databases in different instances

If the databases of the Library Server and Resource Manager are installed in different DB2 instances but on the same server host, you should catalog the Library Server database within the instance of the Resource Manager, to enable the Resource Manager to access the Library Server database. The Resource Manager needs to access the Library Server database to validate the security tokens the Library Server creates when people import or retrieve documents from the Resource Manager.

In the following procedure, replace these placeholders to match your own situation:

  • instance_name_resource_manager -> Name of the instance of the Resource Manager.
  • hostname -> The short hostname of your server.
  • resource_manager_database -> The name of the Resource Manager database.
  • resource_manager_instance_owner -> The owner/userid of the Resource Manager instance.
  • library_server_database -> The name of the Library Server database.
  • instance_name_library_server -> Name of the instance of the Library Server.
  • library_server_instance_owner -> The owner/userid of the Library Server instance.
  • Log in as the database instance owner of the Resource Manager.
  • Open or create the sqllib/profile.env file and add the following lines, to enable TCP/IP communication and the automatic restart:
    DB2ENVLIST='EXTSHM'
    DB2COMM='tcpip'
    DB2AUTOSTART='TRUE'
  • Open or create the sqllib/userprofile file and add the following lines:
    ICMROOT=/usr/lpp/icm
    ICMDLL=/home/db2fenc1
    ICMCOMP=/usr/vacpp/bin
    CMCOMMON=/usr/lpp/cmb/cmgmt
    EXTSHM=ON
    PATH=$PATH:$ICMROOT/bin/DB2
    LIBPATH=$ICMROOT/lib:$ICMROOT/inso:$LIBPATH
    export ICMROOT ICMDLL ICMCOMP CMCOMMON EXTSHM PATH LIBPATH
  • Log off.
  • Log in as the database instance owner of the Library Server.
  • Open or create the sqllib/profile.env file and add the following lines, to enable TCP/IP communication and the automatic restart:
    DB2ENVLIST='LIBPATH ICMROOT ICMDLL ICMCOMP EXTSHM CMCOMMON'
    DB2COMM='tcpip'
    DB2AUTOSTART='TRUE'
  • Open or create the sqllib/userprofile file and add the following lines:
    ICMROOT=/usr/lpp/icm
    ICMDLL=/home/db2fenc1
    ICMCOMP=/usr/vacpp/bin
    CMCOMMON=/usr/lpp/cmb/cmgmt
    EXTSHM=ON
    PATH=$PATH:$ICMROOT/bin/DB2
    LIBPATH=$ICMROOT/lib:$ICMROOT/inso:$LIBPATH
    export ICMROOT ICMDLL ICMCOMP CMCOMMON EXTSHM PATH LIBPATH
  • Catalog the Resource Manager database instance:
    db2 catalog local node instance_name_resource_manager instance instance_name_resource_manager system ostype aix
  • Refresh the database directory cache:
    db2 terminate
  • Catalog the Resource Manager database in the system database directory:
    db2 catalog db resource_manager_database at node instance_name_resource_manager
  • Refresh the database directory cache:
    db2 terminate
  • Try to connect to the Resource Manager database:
    db2 connect to resource_manager_database user resource_manager_instance_owner
  • Disconnect the connection:
    db2 terminate
  • Log off.
In the same way as the Library Server connects to the Resource Manager database, the Resource Manager needs access to the Library Server database.
Perform the following steps on the Resource Manager machine as the Resource Manager instance owner:
  • Log in as the Resource Manager instance owner.
  • Catalog the Resource Manager database instance:
    db2 catalog local node instance_name_resource_manager instance instance_name_resource_manager system ostype aix
  • Catalog the Library Server database in the system database directory:
    db2 catalog db library_server_database at node instance_name_library_server
  • Refresh the database directory cache:
    db2 terminate
  • Try to connect to the Library Server database:
    db2 connect to library_server_database user library_server_instance_owner
  • Disconnect the connection:
    db2 terminate
  • Log off.
  • Log in as the Library Server instance owner.
  • db2 catalog tcpip node CMCDB2 remote hostname server db2c_library_server_instance
  • Log off.
  • Log in as the Resource Manager instance owner.
  • db2 catalog tcpip node CMCDB2 remote hostname server db2c_resource_manager_instance
  • Log off.
  • Log in as root and modify /etc/services:
    db2c_library_server_instance 50000/tcp
    db2c_resource_manager_instance 50001/tcp
Check if you can find both databases. Log in as the Resource Manager instance owner and run:
db2 list database directory
Then connect to both databases.
