There is an LED that you can turn on to identify a device, which can be useful if you need to replace it. It is controlled by the same binary that is used by diag.
To show the syntax:
# /usr/lpp/diagnostics/bin/usysident ?
usage: usysident [-s {normal | identify}]
[-l location code | -d device name]
usysident [-t]
To check the LED status of the system:
# /usr/lpp/diagnostics/bin/usysident
normal
To check the LED status of /dev/hdisk1:
# /usr/lpp/diagnostics/bin/usysident -d hdisk1
normal
To activate the LED of /dev/hdisk1:
# /usr/lpp/diagnostics/bin/usysident -s identify -d hdisk1
# /usr/lpp/diagnostics/bin/usysident -d hdisk1
identify
To turn off the LED of /dev/hdisk1 again:
# /usr/lpp/diagnostics/bin/usysident -s normal -d hdisk1
# /usr/lpp/diagnostics/bin/usysident -d hdisk1
normal
Keep in mind that activating the LED of a particular device does not activate the LED of the system panel. To do that, simply omit the device parameter.
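If you regularly need to point a colleague in the data center at a failed disk, a small wrapper script can blink the LED and reset it afterwards. A minimal sketch, using only the usysident syntax shown above (the 300-second interval and the hdisk1 default are arbitrary choices):
#!/bin/ksh
# Blink the identify LED of one disk for 5 minutes, then switch it back to normal.
disk=${1:-hdisk1}
/usr/lpp/diagnostics/bin/usysident -s identify -d $disk
sleep 300
/usr/lpp/diagnostics/bin/usysident -s normal -d $disk
# Show the resulting state; this should print "normal" again:
/usr/lpp/diagnostics/bin/usysident -d $disk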
Topics: AIX, LVM, System Admin
Renaming disk devices
Getting disk devices named the same way on, for example, the 2 nodes of a PowerHA cluster can be really difficult. For us humans, though, it is very useful to have the disks named the same way on all nodes, so we recognize the disks a lot faster and don't have to worry about picking the wrong disk.
The way to get around this usually involved either creating dummy disk devices or running the configuration manager against a specific adapter, like: cfgmgr -vl fcs0.
This complicated procedure is no longer needed as of AIX 7.1 and AIX 6.1 TL6, because a new command called rendev has been made available, which makes renaming devices very easy:
# lspv
hdisk0          00c8b12ce3c7d496                    rootvg          active
hdisk1          00c8b12cf28e737b                    None
# rendev -l hdisk1 -n hdisk99
# lspv
hdisk0          00c8b12ce3c7d496                    rootvg          active
hdisk99         00c8b12cf28e737b                    None
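If you need to rename several disks at once, a small loop works. A minimal sketch, assuming a mapping file (disk_map.txt is a made-up name, containing one "oldname newname" pair per line):
#!/bin/ksh
# Bulk-rename disks from a mapping file and report each successful rename.
while read old new; do
    rendev -l $old -n $new && print "renamed $old -> $new"
done < disk_map.txt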
Topics: AIX, Backup & restore, System Admin
Lsmksysb
There's a simple command to list information about a mksysb image, called lsmksysb:
# lsmksysb -lf mksysb.image
VOLUME GROUP:           rootvg
BACKUP DATE/TIME:       Mon Jun 6 04:00:06 MST 2011
UNAME INFO:             AIX testaix1 1 6 0008CB1A4C00
BACKUP OSLEVEL:         6.1.6.0
MAINTENANCE LEVEL:      6100-06
BACKUP SIZE (MB):       49920
SHRINK SIZE (MB):       17377
VG DATA ONLY:           no

rootvg:
LV NAME       TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5           boot     1    2    2    closed/syncd  N/A
hd6           paging   32   64   2    open/syncd    N/A
hd8           jfs2log  1    2    2    open/syncd    N/A
hd4           jfs2     8    16   2    open/syncd    /
hd2           jfs2     40   80   2    open/syncd    /usr
hd9var        jfs2     40   80   2    open/syncd    /var
hd3           jfs2     40   80   2    open/syncd    /tmp
hd1           jfs2     8    16   2    open/syncd    /home
hd10opt       jfs2     8    16   2    open/syncd    /opt
dumplv1       sysdump  16   16   1    open/syncd    N/A
dumplv2       sysdump  16   16   1    open/syncd    N/A
hd11admin     jfs2     1    2    2    open/syncd    /admin
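This output lends itself to quick scripted checks before a restore. A minimal sketch that compares the oslevel stored in the image with that of the running system (mksysb.image is the example file name used above):
#!/bin/ksh
# Compare the oslevel inside a mksysb image with the running system.
img=$(lsmksysb -lf mksysb.image | grep "BACKUP OSLEVEL" | awk '{print $NF}')
sys=$(oslevel)
print "image oslevel: $img - system oslevel: $sys"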
The VG type commonly known as standard or normal allows a maximum of 32 physical volumes (PVs). A standard or normal VG allows no more than 1016 physical partitions (PPs) per PV and has an upper limit of 256 logical volumes (LVs) per VG. Subsequently, a new VG type was introduced, referred to as big VG. A big VG allows up to 128 PVs and a maximum of 512 LVs.
AIX 5L Version 5.3 introduced a new VG type called scalable volume group (scalable VG). A scalable VG allows a maximum of 1024 PVs and 4096 LVs. The maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis. This opens up the prospect of configuring VGs with a relatively small number of disks and fine-grained storage allocation options through a large number of PPs, which are small in size. The scalable VG can hold up to 2,097,152 (2048 K) PPs. As with the older VG types, the PP size is specified in units of megabytes and must be equal to a power of 2. The range of PP sizes starts at 1 (1 MB) and goes up to 131,072 (128 GB). This is more than two orders of magnitude above the 1024 (1 GB) maximum for both normal and big VG types in AIX 5L Version 5.2. The new maximum PP size provides architectural support for 256 petabyte disks.
The table below shows the variation of configuration limits with different VG types. Note that the maximum number of user definable LVs is given by the maximum number of LVs per VG minus 1 because one LV is reserved for system use. Consequently, system administrators can configure 255 LVs in normal VGs, 511 in big VGs, and 4095 in scalable VGs.
| VG type | Max PVs | Max LVs | Max PPs per VG | Max PP size |
| Normal VG | 32 | 256 | 32,512 (1016 * 32) | 1 GB |
| Big VG | 128 | 512 | 130,048 (1016 * 128) | 1 GB |
| Scalable VG | 1024 | 4096 | 2,097,152 | 128 GB |
The scalable VG implementation in AIX 5L Version 5.3 provides configuration flexibility with respect to the number of PVs and LVs that can be accommodated by a given instance of the new VG type. The configuration options allow any scalable VG to contain 32, 64, 128, 256, 512, 768, or 1024 disks and 256, 512, 1024, 2048, or 4096 LVs. You do not need to configure the maximum values of 1024 PVs and 4096 LVs at the time of VG creation to account for potential future growth. You can always increase the initial settings at a later date as required.
The System Management Interface Tool (SMIT) and the Web-based System Manager graphical user interface fully support the scalable VG. Existing SMIT panels, which are related to VG management tasks, have been changed and many new panels added to account for the scalable VG type. For example, you can use the new SMIT fast path _mksvg to directly access the Add a Scalable VG SMIT menu.
The user commands mkvg, chvg, and lsvg have been enhanced in support of the scalable VG type.
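For example, creating a scalable VG, and converting an existing VG to the scalable type, could look like this (a sketch; datavg, oldvg and the hdisk names are placeholders, and the chvg -G conversion requires the volume group to be varied off):
# mkvg -S -s 128 -y datavg hdisk4 hdisk5
# varyoffvg oldvg
# chvg -G oldvg
# varyonvg oldvg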
For more information:
http://www.ibm.com/developerworks/aix/library/au-aix5l-lvm.html.
This describes how to resolve the following error when setting the bootlist:
# bootlist -m normal hdisk2 hdisk3
0514-229 bootlist: Multiple boot logical volumes found on 'hdisk2'.
Use the 'blv' attribute to specify the one from which to boot.
To resolve this, clear the boot logical volumes from the disks:
# chpv -c hdisk2
# chpv -c hdisk3
Verify that the disks can no longer be used to boot from by running:
# ipl_varyon -i
Then re-run bosboot on both disks:
# bosboot -ad /dev/hdisk2
bosboot: Boot image is 38224 512 byte blocks.
# bosboot -ad /dev/hdisk3
bosboot: Boot image is 38224 512 byte blocks.
Finally, set the bootlist again:
# bootlist -m normal hdisk2 hdisk3
Another way around it is to specify hd5 using the blv attribute:
# bootlist -m normal hdisk2 blv=hd5 hdisk3 blv=hd5
This will set the correct boot logical volume, but the error will show up again if you ever run the bootlist command without the blv attribute.
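To double-check the result, you can display the boot list that is actually stored, and re-verify the boot records on the disks (the -o flag of bootlist prints the currently set boot list; ipl_varyon -i was shown above):
# bootlist -m normal -o
# ipl_varyon -i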
Topics: AIX, LVM, System Admin
File system creation time
To determine the time and date a file system was created, you can use the getlvcb command. First, figure out which logical volume is used for a particular file system. For example, if you want to know this for the /opt file system:
# lsfs /opt
Name            Nodename   Mount Pt   VFS   Size      Options  Auto  Accounting
/dev/hd10opt    --         /opt       jfs2  4194304   --       yes   no
So file system /opt is located on logical volume hd10opt. Then run the getlvcb command:
# getlvcb -AT hd10opt
        AIX LVCB
        intrapolicy = c
        copies = 2
        interpolicy = m
        lvid = 00f69a1100004c000000012f9dca819a.9
        lvname = hd10opt
        label = /opt
        machine id = 69A114C00
        number lps = 8
        relocatable = y
        strict = y
        stripe width = 0
        stripe size in exponent = 0
        type = jfs2
        upperbound = 32
        fs = vfs=jfs2:log=/dev/hd8:vol=/opt:free=false:quota=no
        time created = Thu Apr 28 20:26:36 2011
        time modified = Thu Apr 28 20:40:38 2011
You can clearly see the "time created" for this file system in the example above.
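If you want this for all file systems at once, a small loop over the logical volumes will do. A minimal sketch, assuming rootvg:
#!/bin/ksh
# Print the creation time of every logical volume in rootvg.
# "tail +3" skips the "rootvg:" line and the column header of lsvg -l.
lsvg -l rootvg | tail +3 | awk '{print $1}' | while read lv; do
    printf "%-12s %s\n" "$lv" "$(getlvcb -AT $lv | grep 'time created')"
done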
When you run the mirrorvg command, it will (by default) lock the volume group it is run against. As a result, you have no way of knowing the status of the sync process that occurs after mirrorvg has run the mklvcopy commands for all the logical volumes in the volume group. Especially with very large volume groups, this can be a problem.
The solution, however, is easy: run the mirrorvg command with the -s option to prevent it from running the sync. Then, when mirrorvg has completed, run the syncvg yourself with the -P option.
For example, if you wish to mirror the rootvg from hdisk0 to hdisk1:
# mirrorvg -s rootvg hdisk1
Of course, make sure the new disk is included in the boot list for the rootvg:
# bootlist -m normal hdisk0 hdisk1
Now rootvg is mirrored, but not yet synced; run "lsvg -l rootvg" and you'll see this. So run the syncvg command yourself. With the -P option you can specify the number of threads that should be started to perform the sync process. Usually, you can specify at least 2 to 3 times the number of cores in the system. Using the -P option has an extra feature: there will be no lock on the volume group, allowing you to run "lsvg rootvg" in another window to check the status of the sync process.
# syncvg -P 4 -v rootvg
And in another window:
# lsvg rootvg | grep STALE | xargs
STALE PVs: 1 STALE PPs: 73
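Instead of refreshing lsvg by hand, you could poll it. A minimal sketch for rootvg (the 60-second interval is an arbitrary choice):
#!/bin/ksh
# Report the number of stale PPs in rootvg once a minute until the sync is done.
while :; do
    stale=$(lsvg rootvg | grep "STALE PPs" | awk '{print $NF}')
    print "$(date) - stale PPs: $stale"
    [ "$stale" -eq 0 ] && break
    sleep 60
done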
Topics: AIX, Oracle, SDD, Storage, System Admin
RAC OCR and VOTE LUNs
Consistent naming is not required for Oracle ASM devices, but LUNs used for the OCR and VOTE functions of Oracle RAC environments must have the same device names on all RAC systems. If the names of the OCR and VOTE devices differ, create a new device for each of these functions on each of the RAC nodes, as follows:
First, check the PVIDs of each disk that is to be used as an OCR or VOTE device on all the RAC nodes. For example, if you're setting up a RAC cluster consisting of 2 nodes, called node1 and node2, check the disks as follows:
root@node1 # lspv | grep vpath | grep -i none
vpath6          00f69a11a2f620c5                    None
vpath7          00f69a11a2f622c8                    None
vpath8          00f69a11a2f624a7                    None
vpath13         00f69a11a2f62f1f                    None
vpath14         00f69a11a2f63212                    None
root@node2 # lspv | grep vpath | grep -i none
vpath4          00f69a11a2f620c5                    None
vpath5          00f69a11a2f622c8                    None
vpath6          00f69a11a2f624a7                    None
vpath9          00f69a11a2f62f1f                    None
vpath10         00f69a11a2f63212                    None
As you can see, vpath6 on node1 is the same disk as vpath4 on node2. You can determine this by looking at the PVID.
Check the major and minor numbers of each device:
root@node1 # cd /dev
root@node1 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw------- 1 root system 47, 6 Apr 28 18:56 vpath6
0 brw------- 1 root system 47, 7 Apr 28 18:56 vpath7
0 brw------- 1 root system 47, 8 Apr 28 18:56 vpath8
0 brw------- 1 root system 47, 13 Apr 28 18:56 vpath13
0 brw------- 1 root system 47, 14 Apr 28 18:56 vpath14
root@node2 # cd /dev
root@node2 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw------- 1 root system 47, 4 Apr 29 13:33 vpath4
0 brw------- 1 root system 47, 5 Apr 29 13:33 vpath5
0 brw------- 1 root system 47, 6 Apr 29 13:33 vpath6
0 brw------- 1 root system 47, 9 Apr 29 13:33 vpath9
0 brw------- 1 root system 47, 10 Apr 29 13:33 vpath10
Now, on each node, set up a consistent naming convention for the OCR and VOTE devices. For example, if you wish to set up 2 OCR and 3 VOTE devices:
On server node1:
# mknod /dev/ocr_disk01 c 47 6
# mknod /dev/ocr_disk02 c 47 7
# mknod /dev/voting_disk01 c 47 8
# mknod /dev/voting_disk02 c 47 13
# mknod /dev/voting_disk03 c 47 14
On server node2:
# mknod /dev/ocr_disk01 c 47 4
# mknod /dev/ocr_disk02 c 47 5
# mknod /dev/voting_disk01 c 47 6
# mknod /dev/voting_disk02 c 47 9
# mknod /dev/voting_disk03 c 47 10
This results in a consistent naming convention for the OCR and VOTE devices on both nodes:
root@node1 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system 47,  6 May 13 07:18 /dev/ocr_disk01
0 crw-r--r-- 1 root system 47,  7 May 13 07:19 /dev/ocr_disk02
0 crw-r--r-- 1 root system 47,  8 May 13 07:19 /dev/voting_disk01
0 crw-r--r-- 1 root system 47, 13 May 13 07:19 /dev/voting_disk02
0 crw-r--r-- 1 root system 47, 14 May 13 07:20 /dev/voting_disk03
root@node2 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system 47,  4 May 13 07:20 /dev/ocr_disk01
0 crw-r--r-- 1 root system 47,  5 May 13 07:20 /dev/ocr_disk02
0 crw-r--r-- 1 root system 47,  6 May 13 07:21 /dev/voting_disk01
0 crw-r--r-- 1 root system 47,  9 May 13 07:21 /dev/voting_disk02
0 crw-r--r-- 1 root system 47, 10 May 13 07:21 /dev/voting_disk03
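With more than a handful of LUNs, you can generate the mknod commands instead of looking up every major/minor number by hand. A sketch, run on each node; the ocr_map.txt file and its format are assumptions, not an existing tool:
#!/bin/ksh
# Generate mknod commands from a PVID-to-alias map, so each node picks up
# its own local major/minor numbers. ocr_map.txt contains one "pvid alias"
# pair per line, e.g.: 00f69a11a2f620c5 ocr_disk01
while read pvid alias; do
    dev=$(lspv | awk -v p="$pvid" '$2 == p {print $1}')
    if [ -z "$dev" ]; then
        print "PVID $pvid not found on this node"
        continue
    fi
    majmin=$(ls -l /dev/$dev | awk '{print $5, $6}' | tr -d ',')
    print "mknod /dev/$alias c $majmin"
done < ocr_map.txt
This only prints the commands, so they can be reviewed before actually being run.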
If you run into the following error:
cl_mklv: Operation is not allowed because vg is a RAID concurrent volume group.
This may be caused by the volume group being varied on, on the other node. If it should not be varied on there, run the following on the other node:
# varyoffvg vg
And then retry the LVM command. If it continues to be a problem, stop HACMP on both nodes, export and re-import the volume group on both nodes, and then restart the cluster.
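To see which volume groups are currently varied on on a node before taking any action (lsvg -o lists only the active volume groups):
# lsvg -o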
Some applications, for example Oracle when using raw logical volumes, may require specific access to logical volumes. Oracle will require that the raw logical volume is owned by the oracle account, and it may or may not require custom permissions.
The default values for a logical volume are: dev_uid=0 (owned by user root), dev_gid=0 (owned by group system) and dev_perm=432 (mode 660). You can check the current settings of a logical volume by using the readvgda command:
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        0
dev_gid:        0
dev_perm:       432
If the logical volume was created with, or has been modified to use, customized owner/group/mode values, the dev_ values will show the current uid/gid/perm values, for example:
# chlv -U user -G staff -P 777 testlv
# ls -als /dev/*testlv
0 crwxrwxrwx 1 user staff 57, 3 Mar 10 14:39 /dev/rtestlv
0 brwxrwxrwx 1 user staff 57, 3 Mar 10 14:39 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        3878
dev_gid:        1
dev_perm:       511
When the volume group is exported and re-imported, this information is lost:
# varyoffvg testvg
# exportvg testvg
# importvg -y testvg vpath51
testvg
# ls -als /dev/*testlv
0 crw-rw---- 1 root system 57, 3 Mar 10 15:11 /dev/rtestlv
0 brw-rw---- 1 root system 57, 3 Mar 10 15:11 /dev/testlv
To avoid this from happening, make sure to use the -R option, which will restore any specific settings:
# chlv -U user -G staff -P 777 testlv
# ls -als /dev/*testlv
0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:11 /dev/rtestlv
0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:11 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        3878
dev_gid:        1
dev_perm:       511
# varyoffvg testvg
# exportvg testvg
# importvg -Ry testvg vpath51
testvg
# ls -als /dev/*testlv
0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/rtestlv
0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/testlv
Never use the chown/chmod/chgrp commands to change the same settings on the logical volume. It will work; however, the updates will not be written to the VGDA, and as soon as the volume group is exported and re-imported on the system, the updates will be gone:
# chlv -U root -G system -P 660 testlv
# ls -als /dev/*testlv
0 crw-rw---- 1 root system 57, 3 Mar 10 15:14 /dev/rtestlv
0 brw-rw---- 1 root system 57, 3 Mar 10 15:14 /dev/testlv
# chown user.staff /dev/testlv /dev/rtestlv
# chmod 777 /dev/testlv /dev/rtestlv
# ls -als /dev/*testlv
0 crwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/rtestlv
0 brwxrwxrwx 1 user staff 57, 3 Mar 10 15:14 /dev/testlv
# readvgda vpath51 | egrep "lvname|dev_|Logical"
lvname:         testlv (i=2)
dev_uid:        0
dev_gid:        0
dev_perm:       360
Notice above how the chlv command changed the owner to root, the group to system, and the permissions to 660. Even after the chown and chmod commands are run, and the changes are visible on the device files in /dev, the changes are not seen in the VGDA. This is confirmed when the volume group is exported and imported, even when using the -R option:
# varyoffvg testvg
# exportvg testvg
# importvg -Ry testvg vpath51
testvg
# ls -als /dev/*testlv
0 crw-rw---- 1 root system 57, 3 Mar 10 15:23 /dev/rtestlv
0 brw-rw---- 1 root system 57, 3 Mar 10 15:23 /dev/testlv
So, when you have customized user/group/mode settings for logical volumes and you need to export and import the volume group, always make sure to use the -R option when running importvg.
Also, never use the chmod/chown/chgrp commands on logical volume block and character devices in /dev; use the chlv command instead, so the VGDA is updated accordingly.
Note: A regular volume group does not store any customized owner/group/mode settings in the VGDA; they are only stored for Big and Scalable volume groups. If you are using a regular volume group with customized owner/group/mode settings for logical volumes, you will have to use the chmod/chown/chgrp commands to update them, especially after exporting and re-importing the volume group.
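For such a regular volume group, re-applying the settings after an import could look like this (a sketch; the oracle user, dba group, and the testvg/testlv names are just examples):
# importvg -y testvg vpath51
# chown oracle.dba /dev/testlv /dev/rtestlv
# chmod 660 /dev/testlv /dev/rtestlv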