Topics: AIX, Backup & restore, NIM, System Admin
Using the image_data resource to restore a mksysb without preserving mirrors using NIM
Specify the 'image_data' resource when running the 'bos_inst' operation from the NIM master:
From command line on the NIM master:
# nim -o bos_inst -a source=mksysb -a lpp_source=[lpp_source] -a spot=[SPOT] -a mksysb=[mksysb] -a image_data=mksysb_image_data -a accept_licenses=yes server1
Using smit on the NIM master:
# smit nim_bosinst
Select the client to install. Select 'mksysb' as the type of install. Select a SPOT at the same level as the mksysb you are installing. Select an lpp_source at the same level as the mksysb you are installing.
NOTE: It is recommended to use an lpp_source at the same AIX Technology Level, but if using an lpp_source at a higher level than the mksysb, the system will be updated to the level of the lpp_source during installation. This will only update Technology Levels.
If you're using an AIX 5300-08 mksysb, you cannot use an AIX 6.1 lpp_source. This will not migrate the version of AIX you are running to a higher version. If you're using an AIX 5300-08 mksysb and allocate a 5300-09 lpp_source, this will update your target system to 5300-09.
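The mirroring itself is dropped through the image.data file that the image_data resource points to. As a minimal, hypothetical sketch (the LV, disk names and partition counts below are illustrative, not taken from the original article): for each rootvg logical volume, change COPIES from 2 to 1, set PP equal to LPs (on a mirrored system PP is twice the LPs value), and list only a single disk:

```
vg_data:
        VGNAME= rootvg
        PPSIZE= 128
        VG_SOURCE_DISK_LIST= hdisk0

lv_data:
        VOLUME_GROUP= rootvg
        LV_SOURCE_DISK_LIST= hdisk0
        LOGICAL_VOLUME= hd5
        COPIES= 1
        LPs= 1
        PP= 1
```
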
Install the Base Operating System on Standalone Clients
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Installation Target server1
* Installation TYPE mksysb
* SPOT SPOTaix53tl09sp3
LPP_SOURCE [LPPaix53tl09sp3]
MKSYSB server1_mksysb
BOSINST_DATA to use during installation []
IMAGE_DATA to use during installation [server1_image_data]
Topics: AIX, Backup & restore, NIM, System Admin↑
How to unconfigure items after mksysb recovery using NIM
There may be situations where you want to test a mksysb recovery on a different host. The major issue with this is that you bring up a server, within the same network, that is a copy of an actual server already in that network. To avoid ending up with two identical servers in your network, here's how you do this:
First make sure that you have a separate IP address available for the server to be recovered, for configuration on your test server. You definitely don't want to bring up a second server in your network with the same IP configuration.
Make sure you have a mksysb created of the server that you wish to recover onto another server. Then, create a simple script that disables all the items that you don't want to have running after the mksysb recovery, for example:
# cat /export/nim/cust_scripts/custom.ksh
#!/bin/ksh
# Save a copy of /etc/inittab
cp /etc/inittab /etc/inittab.org
# Remove unwanted entries from the inittab
rmitab hacmp 2>/dev/null
rmitab tsmsched 2>/dev/null
rmitab tsm 2>/dev/null
rmitab clinit 2>/dev/null
rmitab pst_clinit 2>/dev/null
rmitab qdaemon 2>/dev/null
rmitab sddsrv 2>/dev/null
rmitab nimclient 2>/dev/null
rmitab nimsh 2>/dev/null
rmitab naviagent 2>/dev/null
# Get rid of the crontabs
mkdir -p /var/spool/cron/crontabs.org
mv /var/spool/cron/crontabs/* /var/spool/cron/crontabs.org/
# Disable start scripts
chmod 000 /etc/rc.d/rc2.d/S01app
# copy inetd.conf
cp /etc/inetd.conf /etc/inetd.conf.org
# take out unwanted items
grep -v bgssd /etc/inetd.conf.org > /etc/inetd.conf
# remove the hacmp cluster configuration
if [ -x /usr/es/sbin/cluster/utilities/clrmclstr ] ; then
/usr/es/sbin/cluster/utilities/clrmclstr
fi
# clear the error report
errclear 0
# clean out mail queue
rm /var/spool/mqueue/*
The next thing you need to do, is to configure this script as a 'script resource' in NIM. Run:
# smitty nim_mkres
Select 'script' and complete the form afterwards. For example, if you called it 'UnConfig_Script':
# lsnim -l UnConfig_Script
UnConfig_Script:
   class       = resources
   type        = script
   comments    =
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /export/nim/cust_scripts/custom.ksh
   alloc_count = 0
   server      = master
Then, when you are ready to perform the actual mksysb recovery using "smitty nim_bosinst", you can add this script resource on the following line:
Customization SCRIPT to run after installation [UnConfig_Script]
Topics: AIX, Backup & restore↑
NFS mksysb script
Here's a script you can use to run mksysb backups of your clients to an NFS server. It is generally a good idea to set up a NIM server and also use this NIM server as an NFS server. All your clients should then be configured to create their mksysb backups on the NIM/NFS server, using the script that you can download here: nimbck.ksh.
By doing this, the latest mksysb images are available on the NIM server. This way, you can configure a mksysb resource on the NIM server (use: smitty nim_mkres) pointing to the mksysb image of a server, for easy recovery.
Topics: AIX, EMC, SAN, Storage, System Admin↑
Unable to remove hdiskpower devices due to a method error
If you get a method error when trying to rmdev -dl your hdiskpower devices, then follow this procedure.
Cannot remove hdiskpower devices with rmdev; you get the error "method error (/etc/methods/ucfgpowerdisk):". The fix is to uninstall/reinstall PowerPath, but you won't be able to until you remove the hdiskpower devices with this procedure:
# odmdelete -q name=hdiskpowerX -o CuDv
(for every hdiskpower device)
# odmdelete -q name=hdiskpowerX -o CuAt
(for every hdiskpower device)
# odmdelete -q name=powerpath0 -o CuDv
# odmdelete -q name=powerpath0 -o CuAt
# rm /dev/powerpath0
- You must remove the modified files installed by PowerPath and then reboot the server. You will then be able to uninstall PowerPath after the reboot via the "installp -u EMCpower" command. The files to be removed are as follows:
(Do not be concerned if some of the removals do not work, as PowerPath may not be fully configured properly.)
# rm ./etc/PowerPathExtensions
# rm ./etc/emcp_registration
# rm ./usr/lib/boot/protoext/disk.proto.ext.scsi.pseudo.power
# rm ./usr/lib/drivers/pnext
# rm ./usr/lib/drivers/powerdd
# rm ./usr/lib/drivers/powerdiskdd
# rm ./usr/lib/libpn.a
# rm ./usr/lib/methods/cfgpower
# rm ./usr/lib/methods/cfgpowerdisk
# rm ./usr/lib/methods/chgpowerdisk
# rm ./usr/lib/methods/power.cat
# rm ./usr/lib/methods/ucfgpower
# rm ./usr/lib/methods/ucfgpowerdisk
# rm ./usr/lib/nls/msg/en_US/power.cat
# rm ./usr/sbin/powercf
# rm ./usr/sbin/powerprotect
# rm ./usr/sbin/pprootdev
# rm ./usr/lib/drivers/cgext
# rm ./usr/lib/drivers/mpcext
# rm ./usr/lib/libcg.so
# rm ./usr/lib/libcong.so
# rm ./usr/lib/libemcp_mp_rtl.so
# rm ./usr/lib/drivers/mpext
# rm ./usr/lib/libmp.a
# rm ./usr/sbin/emcpreg
# rm ./usr/sbin/powermt
# rm ./usr/share/man/man1/emcpreg.1
# rm ./usr/share/man/man1/powermt.1
# rm ./usr/share/man/man1/powerprotect.1
- Re-install PowerPath.
How best to configure the /etc/netsvc.conf file, making it easier to troubleshoot DNS issues: this file should resolve locally first and then through DNS. The line would read:
hosts=local,bind
You then need to make sure that all the local adapter IP addresses are entered in /etc/hosts. After that is complete, for every adapter on the system you would run:
# host <adapter IP address>
This will ensure the host command generates the same output (the hostname) with and without /etc/netsvc.conf. That way, you'll know you can continue to do certain things while troubleshooting a DNS problem.
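Put together, the configuration might look like this (the hostnames and addresses below are made-up examples); /etc/hosts carries an entry for every local adapter so local resolution still succeeds when DNS is down:

```
# /etc/netsvc.conf
hosts = local, bind

# /etc/hosts -- one entry per local adapter
127.0.0.1     loopback localhost
10.1.1.10     server1
10.1.2.10     server1_boot
10.1.3.10     server1_stby
```
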
Here are a couple of rules that your paging spaces should adhere to, for best performance:
- The size of paging space should match the size of the memory.
- Use more than one paging space, on different disks to each other.
- All paging spaces should have the same size.
- All paging spaces should be mirrored.
- Paging spaces should not be put on "hot" disks.
The command svmon -G can be used to determine the actual memory consumption of a server. To determine if the memory is over-committed, you need to divide the memory-virtual value by the memory-size value, e.g.:
# svmon -G
               size       inuse        free         pin     virtual
memory      5079040     5076409        2631      706856     2983249
pg space    7864320       12885

               work        pers        clnt       other
pin          540803           0        2758      163295
in use      2983249           0     2093160

PageSize   PoolSize       inuse        pgsp         pin     virtual
s    4 KB         -     4918761       12885      621096     2825601
m   64 KB         -        9853           0        5360        9853
In this example, the memory-virtual value is 2983249, and the memory-size value is 5079040. Note that the actual memory-inuse (5076409) is nearly the same as the memory-size (5079040) value. This is simply AIX caching as much as possible in its memory. Hence, the memory-free value is typically very low, 2631 in the example above. As such, determining the memory size based on the memory-free value does not provide a good interpretation of the actual memory consumption, as memory typically includes a lot of cached data.
Now, to determine the actual memory consumption, divide memory-virtual by memory-size:
# bc
scale=2
2983249/5079040
.58
Thus, the actual memory consumption is 58% of the memory. The size of the memory is 5079040 blocks of 4 KB = 19840 MB. The free memory is thus: (100% - 58%) * 19840 MB = 8332 MB.
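The same arithmetic can be scripted, for instance with awk (a sketch; the page counts are hard-coded from the svmon example above, and the int() truncation mimics bc's scale=2 behavior):

```shell
size=5079040      # memory "size" column, in 4 KB pages
virtual=2983249   # memory "virtual" column, in 4 KB pages

awk -v v="$virtual" -v s="$size" 'BEGIN {
    pct    = int(v / s * 100) / 100      # actual memory consumption: 0.58
    sizemb = s * 4 / 1024                # total memory: 19840 MB
    freemb = int((1 - pct) * sizemb)     # free memory: 8332 MB
    printf "consumption=%.2f size=%dMB free=%dMB\n", pct, sizemb, freemb
}'
```
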
Try to keep the value of memory consumption less than 90%. Above that, you will generally start seeing paging activity using the vmstat command. By that time, it is a good idea to lower the load on the system or to get more memory in your system.
No more ordering CDROMs or DVDs and waiting days. Download the .iso image over the web and install from there. Use the virtual DVD drive on your VIOS 2.1 and install directly into the LPAR or read the contents into your NIM Server.
Mount the .ISO image:
- On AIX 6.1 or AIX 7, use the loopmount command: http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds3/loopmount.htm
- On AIX 5.3, use the mklv-dd-mount trick: https://www.ibm.com/developerworks/wikis/display/WikiPtype/AIXV53MntISO
You have to prove you are entitled via: Customer number, Machine serial numbers or SWMA. The Entitled Software Support Download User Guide can be downloaded here: ftp://public.dhe.ibm.com/systems/support/planning/essreg/I1128814.pdf. Then you can download the AIX media, Expansion Packs, Linux Toolbox and more.
Start at: www.ibm.com/eserver/ess.
Memory utilization on AIX systems typically runs around 100%. This is often a source of concern. However, high memory utilization in AIX does not imply the system is out of memory. By design, AIX leaves files it has accessed in memory. This significantly improves performance when AIX reaccesses these files because they can be reread directly from memory, not disk. When AIX needs memory, it discards files using a "least used" algorithm. This generates no I/O and has almost no performance impact under normal circumstances.
Sustained paging activity is the best indication of low memory. Paging activity can be monitored using the "vmstat" command. If the "page-in" (PI) and "page-out" (PO) columns show non-zero values over "long" periods of time, then the system is short on memory. (All systems will show occasional paging, which is not a concern.)
Memory requirements for applications can be empirically determined using the AIX "rmss"command. The "rmss" command is a test tool that dynamically reduces usable memory. The onset of paging indicates an application's minimum memory requirement.
Finally, the "svmon" command can be used to list how much memory is used each process. The interpretation of the svmon output requires some expertise. See the AIX documentation for details.
To test the performance gain of leaving a file in memory, a 40MB file was read twice. The first read was from disk, the second was from memory. The first read took 10.0 seconds. The second read took 1.3 seconds: a 7.4x improvement.
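A rough, portable sketch of such a test (the file path and 40 MB size are illustrative, and timings will differ per system; note that right after writing, the file is likely already cached, so a truly cold first read would require a reboot or cache flush):

```shell
# Create a 40 MB test file, then read it back twice and time each pass.
dd if=/dev/zero of=/tmp/cachetest bs=1048576 count=40 2>/dev/null

for pass in 1 2; do
    start=$(date +%s)
    dd if=/tmp/cachetest of=/dev/null bs=1048576 2>/dev/null
    echo "read $pass took $(( $(date +%s) - start )) second(s)"
done

rm /tmp/cachetest
```
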
Topics: AIX, Storage, System Admin↑
Using NFS
The Network File System (NFS) is one of a category of filesystems known as distributed filesystems. It allows users to access files resident on remote systems without even knowing that a network is involved, and thus allows filesystems to be shared among computers. These remote systems could be located in the same room or could be miles away.
In order to access such files, two things must happen. First, the remote system must make the files available to other systems on the network. Second, these files must be mounted on the local system to be able to access them. The mounting process makes the remote files appear as if they are resident on the local system. The system that makes its files available to others on the network is called a server, and the system that uses a remote file is called a client.
NFS Server
NFS consists of a number of components including a mounting protocol, a file locking protocol, an export file and daemons (mountd, nfsd, biod, rpc.lockd, rpc.stad) that coordinate basic file services.
Systems using NFS make the files available to other systems on the network by "exporting" their directories to the network. An NFS server exports its directories by putting the names of these directories in the /etc/exports file and executing the exportfs command. In its simplest form, /etc/exports consists of lines of the form:
pathname -option, option ...
Where pathname is the name of the file or directory to which network access is to be allowed; if pathname is a directory, then all of the files and directories below it within the same filesystem are also exported, but not any filesystems mounted within it. The next fields in the entry consist of various options that specify the type of access to be given and to whom. For example, a typical /etc/exports file may look like this:
/cyclop/users -access=homer:bart,root=homer
/usr/share/man -access=marge:maggie:lisa
/usr/mail
This export file permits the filesystem /cyclop/users to be mounted by homer and bart, and allows root access to it from homer. In addition, it lets /usr/share/man be mounted by marge, maggie and lisa. The filesystem /usr/mail can be mounted by any system on the network. Filesystems listed in the export file without a specific set of hosts are mountable by all machines. This can be a sizable security hole.
When used with the -a option, the exportfs command reads the /etc/exports file and exports all the directories listed to the network. This is usually done at system startup time.
# exportfs -va
If the contents of /etc/exports change, you must tell mountd to reread it. This can be done by re-executing the exportfs command after the export file is changed.
The exact attributes that can be specified in the /etc/exports file vary from system to system. The most common attributes are:
- -access=list : Colon-separated list of hostnames and netgroups that can mount the filesystem.
- -ro : Export read-only; no clients may write on the filesystem.
- -rw=list : List enumerates the hosts allowed to mount for writing; all others must mount read-only.
- -root=list : Lists hosts permitted to access the filesystem as root. Without this option, root access from a client is equivalent to access by the user nobody (usually UID -1).
- -anon : Specifies UID that should be used for requests coming from an unknown user. Defaults to nobody.
- -hostname : Allow hostname to mount the filesystem.
/cyclop/users -rw=moe,anon=-1
/usr/inorganic -ro
This allows moe to mount /cyclop/users for reading and writing, and maps anonymous users (users from other hosts that do not exist on the local system and the root user from any remote system) to the UID -1. This corresponds to the nobody account, and it tells NFS not to allow such users access to anything.
NFS Clients
After the files, directories and/or filesystems have been exported, an NFS client must explicitly mount them before it can use them. The mount request is handled by the mountd daemon (sometimes called rpc.mountd) on the server. The server examines the mount request to be sure the client has proper authorization.
The following syntax is used for the mount command. Note that the name of the server is followed by a colon and the directory to be mounted:
# mount server1:/usr/src /src
Here, the directory structure /usr/src resident on the remote system server1 is mounted on the /src directory on the local system.
When the remote filesystem is no longer needed, it is unmounted with the umount command:
# umount server1:/usr/src
The mount command can be used to establish temporary network mounts, but mounts that are part of a system's permanent configuration should either be listed in /etc/filesystems (for AIX) or handled by an automatic mounting service such as automount or amd.
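For the /etc/filesystems route, a minimal sketch of a stanza that makes the server1:/usr/src mount above permanent might look like this (the mount options shown are common choices, not prescribed by the original text):

```
/src:
        dev             = "/usr/src"
        vfs             = nfs
        nodename        = server1
        mount           = true
        options         = bg,hard,intr,rw
        account         = false
```
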
NFS Commands
- lsnfsexp : Displays the characteristics of directories that are exported with the NFS.
# lsnfsexp
/software -ro
- mknfsexp -d path -t ro : Exports a read-only directory to NFS clients and adds it to /etc/exports.
# mknfsexp -d /software -t ro
/software ro
Exported /software
# lsnfsexp
/software -ro
- rmnfsexp -d path : Unexports a directory from NFS clients and removes it from /etc/exports.
# rmnfsexp -d /software
- lsnfsmnt : Displays the characteristics of NFS mountable file systems.
- showmount -e : List exported filesystems.
# showmount -e
export list for server:
/software (everyone)
- showmount -a : List hosts that have remotely mounted local systems.
# showmount -a
server2:/sourcefiles
server3:/datafiles
NFS Daemons
In the following discussion, reference to daemon implies any one of the SRC-controlled daemons (such as nfsd or biod).
The NFS daemons can be automatically started at system (re)start by including the /etc/rc.nfs script in the /etc/inittab file.
They can also be started manually by executing the following command:
# startsrc -s Daemon or startsrc -g nfs
Where the -s option will start an individual daemon and -g will start all of them.
These daemons can be stopped one at a time or all at once by executing the following command:
# stopsrc -s Daemon or stopsrc -g nfs
You can get the current status of these daemons by executing the following commands:
# lssrc -s [Daemon]
# lssrc -a
If the /etc/exports file does not exist, the nfsd and the rpc.mountd daemons will not start. You can get around this by creating an empty /etc/exports file. This will allow the nfsd and the rpc.mountd daemons to start, although no filesystems will be exported.


