Wodim is an easy-to-use tool for writing an ISO image to DVD, and it is included with Red Hat.
To write an ISO image to DVD, first determine the device name of the DVD burner. Most often, it is /dev/sr0. To validate this, run:
# ls -als /dev/sr0
If that's the correct device, all you need is an ISO image. Let's say your ISO image is located at /path/to/image.iso. In that case, use the following command to write the ISO image to DVD:
# wodim dev=/dev/sr0 -v -data /path/to/image.iso
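The command above can be wrapped with a couple of sanity checks. The sketch below is illustrative: the burn_iso helper name and the DRY_RUN flag are my own additions, not part of wodim; only the final wodim invocation mirrors the article.

```shell
# Minimal sketch: check the burner device and the ISO file before calling
# wodim. DRY_RUN=1 prints the command instead of burning, so it can be
# previewed safely.
burn_iso() {
    dev="$1"; iso="$2"
    if [ ! -e "$dev" ]; then
        echo "error: device $dev not found" >&2
        return 1
    fi
    if [ ! -f "$iso" ]; then
        echo "error: ISO image $iso not found" >&2
        return 1
    fi
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "wodim dev=$dev -v -data $iso"
    else
        wodim dev="$dev" -v -data "$iso"
    fi
}
```

For example, `DRY_RUN=1 burn_iso /dev/sr0 /path/to/image.iso` shows the exact command that would run.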
An ISO image or .iso (International Organization for Standardization) file is an archive file that contains a disk image in the ISO 9660 file system format. ISO files carry the .iso extension, named after the ISO 9660 file system, and are typically used with CD/DVD-ROMs. In simple words, an ISO file is a disk image.
Typically, an ISO image contains installation media, such as an operating system installer, a game, or another application. Sometimes we need to access and view the contents of these ISO images without wasting disk space and time burning them to CD/DVD.
This article describes how to mount and unmount an ISO image on RHEL to access and list the content of ISO images.
To mount an ISO image, you must be logged in as the root user. Run the following command from a terminal to create a mount point:
# mkdir /mnt/iso
Once you have created the mount point, use the mount command to mount the ISO file. We'll use a file called rhel-server-6.6-x86_64-dvd.iso for our example:
# mount -t iso9660 -o loop /tmp/rhel-server-6.6-x86_64-dvd.iso /mnt/iso/
After the ISO image is mounted successfully, go to the mount point at /mnt/iso and list the contents of the ISO image. It is mounted in read-only mode, so none of the files can be modified:
# cd /mnt/iso
# ls -l
You will now see the list of files of the ISO image that we mounted with the above command.
To unmount an ISO image, run the following command from the terminal as root:
# umount /mnt/iso
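To verify the mount state described above (iso9660, read-only), you can parse the mount table. The following is a small sketch; the iso_mounted_ro name is mine, and it simply does text processing on /proc/mounts-format input, so feeding it /proc/mounts checks the live system:

```shell
# Sketch: report whether a given mount point is an iso9660 filesystem
# mounted read-only, based on mount-table lines read from stdin
# (fields: device, mount point, fstype, options, ...).
iso_mounted_ro() {
    # usage: iso_mounted_ro /mnt/iso < /proc/mounts
    awk -v mp="$1" '$2 == mp && $3 == "iso9660" && $4 ~ /(^|,)ro(,|$)/ { found = 1 }
                    END { print (found ? "yes" : "no") }'
}
```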
Red Hat cluster controls the startup and shutdown of all application components on all nodes within a cluster. Red Hat cluster's standard commands can be used to check the status of the cluster and to start, stop, or fail over resource groups.
Following is a list of some useful cluster commands.
- To check cluster status: clustat
- To start cluster manager: service cman start (do this on both nodes right away, within 60 seconds)
- To start cluster LVM daemon: service clvmd start (do on both nodes)
- To start Resource group manager: service rgmanager start (do on both nodes)
- To enable and start a user service: clusvcadm -e service_name (check with clustat for available service names in your cluster)
- To disable and stop a user service: clusvcadm -d service_name (check with clustat for available service names in your cluster)
- To stop Resource group manager: service rgmanager stop
- To stop cluster LVM daemon: service clvmd stop
- To stop cluster manager: service cman stop (Do not stop CMAN at the same time on all nodes)
- To relocate user service: clusvcadm -r service_name (check with clustat for available service names in your cluster)
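The start and stop ordering above can be captured in a small script. This is only a sketch (the function names and the ECHO dry-run variable are my own convention, not a Red Hat tool); with ECHO=echo it prints the sequence instead of running it, which is handy for review before touching a live cluster:

```shell
# Sketch: the documented start/stop ordering for the Red Hat cluster
# services. Set ECHO=echo for a dry run that only prints the commands.
cluster_start() {
    ${ECHO:-} service cman start       # first, on both nodes within 60 seconds
    ${ECHO:-} service clvmd start      # then the clustered LVM daemon
    ${ECHO:-} service rgmanager start  # finally the resource group manager
}
cluster_stop() {
    ${ECHO:-} service rgmanager stop   # reverse order for shutdown
    ${ECHO:-} service clvmd stop
    ${ECHO:-} service cman stop        # never stop cman on all nodes at once
}
```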
The following procedure can be used to copy the printer configuration from one AIX system to another AIX system. This has been tested using different AIX levels, and has worked great. This is particularly useful if you have more than just a few printer queues configured, and configuring all printer queues manually would be too cumbersome.
- Create a full backup of your system, just in case something goes wrong.
- Run lssrc -g spooler and check if qdaemon is active. If not, start it with startsrc -s qdaemon.
- Copy /etc/qconfig from the source system to the target system.
- Copy /etc/hosts from the source system to the target system, but be careful to not lose important entries in /etc/hosts on the target system (e.g. the hostname and IP address of the target system should be in /etc/hosts).
- On the target system, refresh the qconfig file by running: enq -d
- On the target system, remove all files in /var/spool/lpd/pio/@local/custom, /var/spool/lpd/pio/@local/dev and /var/spool/lpd/pio/@local/ddi.
- Copy the contents of /var/spool/lpd/pio/@local/custom on the source system to the target system into the same folder.
- Copy the contents of /var/spool/lpd/pio/@local/dev on the source system to the target system into the same folder.
- Copy the contents of /var/spool/lpd/pio/@local/ddi on the source system to the target system into the same folder.
- Create the following script, called newq.sh, and run it:
#!/bin/ksh
let counter=0
cp /usr/lpp/printers.rte/inst_root/var/spool/lpd/pio/@local/smit/* \
   /var/spool/lpd/pio/@local/smit
cd /var/spool/lpd/pio/@local/custom
chmod 775 /var/spool/lpd/pio/@local/custom
for FILE in `ls` ; do
   let counter="$counter+1"
   chmod 664 $FILE
   QNAME=`echo $FILE | cut -d':' -f1`
   DEVICE=`echo $FILE | cut -d':' -f2`
   echo $counter : chvirprt -q $QNAME -d $DEVICE
   chvirprt -q $QNAME -d $DEVICE
done
- Test and confirm printing is working.
- Remove file newq.sh.
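After copying /etc/qconfig, it can be useful to list the queue names it defines, for comparison between source and target. The sketch below is a guess at a minimal parser: in qconfig, each stanza starts with "name:" and queue stanzas carry a "device = ..." attribute, so the awk keeps only those stanza names. It is tested against a toy qconfig, not a full production one.

```shell
# Sketch: print the print-queue names (stanzas containing a "device ="
# attribute) from a qconfig-style file given as the first argument.
list_queues() {
    awk '
        /^[^ \t*].*:/   { stanza = $1; sub(/:.*/, "", stanza); next }
        /device[ \t]*=/ { if (stanza != "") { print stanza; stanza = "" } }
    ' "$1"
}
```

Running it on both systems and diffing the output is a quick sanity check that all queues made it across.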
This is how you update your HMC from version 7.9.0 to Service Pack 3 and all necessary fixes. At the time of writing, Service Pack 3 is the latest available service pack, and there are 2 fixes available for V7 R7.9.0 SP3, called MH01587 and MH01605. The following procedure assumes that your HMC is currently at the base level of version 7.9.0, without any additional fixes or service packs installed.
This procedure is completely command line based. For this to work, you need to be able to ssh into the HMC using the hscroot user. For example, if your HMC is called yourhmc, you should be able to do this:
# ssh -l hscroot yourhmc
We also need to make sure we have some backups. Start with saving some output:
# lshmc -v
# lshmc -V
# lshmc -n
# lshmc -r
The information output by the lshmc commands is useful to determine what is currently installed on the HMC.
Next, take a console data backup of the HMC:
# bkconsdata -r nfs -h 10.11.12.13 -l /mksysb/HMC -d backupfile
The bkconsdata command above backs up the console data of the HMC via NFS to host 10.11.12.13 (replace with your own server name or IP address), and stores it in /mksysb/HMC/backupfile (replace /mksysb/HMC and backupfile with the correct backup location on your NFS server).
Next, make a backup of the profiles for each managed server:
# bkprofdata -m <managed system> -f --force
The bkprofdata command above requires the name of each managed system. A good way to learn the names of the managed systems configured on the HMC is by running the following command:
# lssysconn -r all
Now that we have all the necessary backups, it's time to perform the actual update.
Let's start with the update to Service Pack 3:
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/updates/HMC_Update_V7R790_SP3.iso -r
This will download the service pack from the IBM site to the HMC via FTP, update the HMC, and reboot it. This may take a while. The updhmc command may return a prompt after the download is completed, but that does not mean the update has already occurred. Please allow it to install and reboot. A message will be shown on the screen: "The system is shutting down for reboot now". After the reboot, run the "lshmc -V" command again. It may take some time before the lshmc command responds with proper output; again, give it some time. As soon as the lshmc command shows that the service pack is installed, you can move forward to the next step.
The next step is installing the fixes:
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/fixes/MH01587.iso -r
And...
# updhmc -t s -h ftp.software.ibm.com -u anonymous -p ftp -f /software/server/hmc/fixes/MH01605.iso -r
After each fix is installed, the HMC will reboot, and you'll have to check with "lshmc -V" if the fix is installed.
And that concludes the update. If any new service packs and/or fixes are released by IBM, you can install them in a similar fashion.
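Since the procedure repeatedly checks "lshmc -V" for a marker such as a service pack or fix name, a tiny helper can make that check scriptable. This is a sketch under assumptions: hmc_has is a hypothetical name, and the exact lshmc -V layout varies by HMC release, so the sample text used here is illustrative only.

```shell
# Sketch: read lshmc -V style output on stdin and report whether a given
# marker (e.g. "MH01587" or "Service Pack: 3") appears in it.
hmc_has() {
    # usage: lshmc -V | hmc_has MH01587
    grep -q "$1" && echo "$1 installed" || echo "$1 missing"
}
```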
If you have an LPAR that is not booting from your NIM server, and you're certain the IP configuration on the client is correct (for example, by completing a successful ping test), then you should have a look at the bootp process on the NIM server as a possible cause of the issue.
To accomplish this, you can put bootp into debug mode. Edit file /etc/inetd.conf, and comment out the bootps entry with a hash mark (#). This prevents bootp from being started by inetd in response to a bootp request. Then refresh the inetd daemon to pick up the changes to file /etc/inetd.conf:
# refresh -s inetd
Now check if any bootpd processes are running. If necessary, use kill -9 to kill them, and check again that no more bootpd processes are active. Now that bootp has stopped, go ahead and bring up another PuTTY window on your NIM master. You'll need another window open, because putting bootp into debug mode will lock the window while it is active. Run the following command in that window:
# bootpd -d -d -d -d -s
Now you can retry booting the LPAR from your NIM master, and you should see information scrolling by showing what is going on.
Afterwards, once you've identified the issue, make sure to stop the bootpd process (just hit Ctrl-C to make it stop), change file /etc/inetd.conf back the way it was, and run refresh -s inetd to refresh it again.
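The edit to /etc/inetd.conf described above can be done with sed. The sketch below works on any file you point it at, so you can preview the change on a copy first; the function names are my own. For real use, run it against /etc/inetd.conf as root and write the output back.

```shell
# Sketch: comment out (or restore) the bootps line in an inetd.conf-style
# file. Both functions print the modified file to stdout, leaving the
# original untouched.
disable_bootps() { sed 's/^bootps/#bootps/' "$1"; }
enable_bootps()  { sed 's/^#bootps/bootps/' "$1"; }
```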
On Linux systems, a tmpfs filesystem keeps the entire filesystem (with all its files) in virtual memory. All data is stored in memory, which means the data is temporary and will be lost after a reboot. If you unmount the filesystem, all data in the file system is gone. Many installations also use a tmpfs for /tmp, and hence anything written to /tmp is wiped after a reboot.
To increase the size, do the following:
Modify /etc/fstab line to look something like this:
none /raw tmpfs defaults,size=2G 0 0
Then, re-mount the file system:
# mount -o remount /raw
# df -h
Note: Be careful not to increase it too much, as the system will use up real memory.
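Before remounting, it can be handy to check what size is actually configured in fstab. The following sketch (tmpfs_size is my own helper name) is pure text processing on an fstab-style file, so it is safe to run anywhere:

```shell
# Sketch: print the size= option of a tmpfs entry for a given mount point
# from an fstab-format file (fields: device, mount point, fstype, options).
tmpfs_size() {
    # usage: tmpfs_size /raw /etc/fstab
    awk -v mp="$1" '$3 == "tmpfs" && $2 == mp {
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (opts[i] ~ /^size=/) { sub(/^size=/, "", opts[i]); print opts[i] }
    }' "$2"
}
```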
The following is a procedure to add shared storage to a clustered, virtualized environment. This assumes the following: You have a PowerHA cluster on two nodes, nodeA and nodeB. Each node is on a separate physical system, and each node is a client of a VIOS. The storage from the VIOS is mapped as vSCSI to the client. Client nodeA is on viosA, and client nodeB is on viosB. Furthermore, this procedure assumes you're using SDDPCM for multi-pathing on the VIOS.
First of all, have your storage admin allocate and zone shared LUN(s) to the two VIOS. This needs to be one or more LUNs zoned to both VIOS. This procedure assumes you will be zoning 4 LUNs of 128 GB each.
Once that is completed, then move to work on the VIOS:
SERVER: viosA
First, gather some system information as user root on the VIOS, and save this information to a file for safe-keeping:
# lspv
# lsdev -Cc disk
# /usr/ios/cli/ioscli lsdev -virtual
# lsvpcfg
# datapath query adapter
# datapath query device
# lsmap -all
Discover the new SAN LUNs (4 * 128 GB) as user padmin on the VIOS. This can be accomplished by running cfgdev, the alternative to cfgmgr on the VIOS. Once that has run, identify the 4 new hdisk devices on the system, and run the "bootinfo -s" command to determine the size of each of the 4 new disks:
# cfgdev
# lspv
# datapath query device
# bootinfo -s hdiskX
Change the PVID for the disks (repeat for all the LUNs):
# chdev -l hdiskX -a pv=yes
Next, map the new LUNs from viosA to the nodeA LPAR. You'll need to know two things here: [a] what vhost adapter (or "vadapter") to use, and [b] what name to give each new device (or "virtual target device"). Have a look at the output of the "lsmap -all" command that you ran previously. That will provide information on the current naming scheme for the virtual target devices. Also, it will show you what vhost adapters already exist and are in use for the client. In this case, we'll assume the vhost adapter is vhost0, and there are already some virtual target devices, called nodeA_vtd0001 through nodeA_vtd0019. The four new LUNs therefore will be named nodeA_vtd0020 through nodeA_vtd0023. We'll also assume the new disks are numbered hdisk44 through hdisk47.
# mkvdev -vdev hdisk44 -vadapter vhost0 -dev nodeA_vtd0020
# mkvdev -vdev hdisk45 -vadapter vhost0 -dev nodeA_vtd0021
# mkvdev -vdev hdisk46 -vadapter vhost0 -dev nodeA_vtd0022
# mkvdev -vdev hdisk47 -vadapter vhost0 -dev nodeA_vtd0023
Now the mapping of the LUNs is complete on viosA. You'll have to repeat the same process on viosB.
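The mkvdev commands above follow a simple pattern, so a loop can generate them and avoid copy/paste slips in the virtual target device numbering. This is a sketch (gen_mkvdev is my own name; the nodeX_vtdNNNN naming scheme follows this article): it prints the commands for review rather than executing them on a VIOS.

```shell
# Sketch: generate mkvdev commands for a batch of disks. Arguments: node
# name, vhost adapter, starting vtd number, then the hdisk devices.
gen_mkvdev() {
    node="$1"; vhost="$2"; start="$3"; shift 3
    n="$start"
    for disk in "$@"; do
        printf 'mkvdev -vdev %s -vadapter %s -dev %s_vtd%04d\n' \
            "$disk" "$vhost" "$node" "$n"
        n=$((n + 1))
    done
}
```

For example, `gen_mkvdev nodeA vhost0 20 hdisk44 hdisk45 hdisk46 hdisk47` prints the four mappings shown above.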
SERVER: viosB
First, gather some system information as user root on the VIOS, and save this information to a file for safe-keeping:
# lspv
# lsdev -Cc disk
# /usr/ios/cli/ioscli lsdev -virtual
# lsvpcfg
# datapath query adapter
# datapath query device
# lsmap -all
Discover the new SAN LUNs (4 * 128 GB) as user padmin on the VIOS, again by running cfgdev. Once that has run, identify the 4 new hdisk devices on the system, and run the "bootinfo -s" command to determine the size of each of the 4 new disks:
# cfgdev
# lspv
# datapath query device
# bootinfo -s hdiskX
No need to set the PVID this time. It was already configured on viosA, and after running the cfgdev command, the PVID should be visible on viosB and should match the PVIDs on viosA. Make sure this is correct:
# lspv
Map the new LUNs from viosB to the nodeB LPAR. Again, you'll need to know the vadapter and the virtual target device names to use, and you can derive that information from the output of the "lsmap -all" command. If you've done your work correctly in the past, the naming of the vadapter and the virtual target devices will probably be the same on viosB as on viosA:
# mkvdev -vdev hdisk44 -vadapter vhost0 -dev nodeB_vtd0020
# mkvdev -vdev hdisk45 -vadapter vhost0 -dev nodeB_vtd0021
# mkvdev -vdev hdisk46 -vadapter vhost0 -dev nodeB_vtd0022
# mkvdev -vdev hdisk47 -vadapter vhost0 -dev nodeB_vtd0023
Now that the mapping on both VIOS has been completed, it is time to move to the client side.
First, gather some information about the PowerHA cluster on the clients, by running as root on the nodeA client:
# clstat -o
# clRGinfo
# lsvg | lsvg -pi
Run cfgmgr on nodeA to discover the mapped LUNs, and then on nodeB:
# cfgmgr
# lspv
Ensure that the disk attributes are correctly set on both servers. Repeat the following command for all 4 new disks:
# chdev -l hdiskX -a algorithm=fail_over -a hcheck_interval=60 -a queue_depth=20 -a reserve_policy=no_reserve
Now you can add the 4 newly added physical volumes to a shared volume group. In our example, the shared volume group is called sharedvg, and the newly discovered disks are called hdisk55 through hdisk58. Finally, the concurrent resource group is called concurrent_rg.
# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -g'concurrent_rg' -R'nodeA' sharedvg hdisk55 hdisk56 hdisk57 hdisk58
Next, you can move forward to creating logical volumes (and file systems, if necessary), for example, when creating raw logical volumes for an Oracle database:
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw5 sharedvg 1023 hdisk55
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw6 sharedvg 1023 hdisk56
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw7 sharedvg 1023 hdisk57
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw8 sharedvg 1023 hdisk58
Finally, verify the volume group:
# lsvg -p sharedvg
# lsvg sharedvg
# ls -l /dev/asm_raw*
If necessary, these are the steps to complete if the addition of LUNs has to be backed out:
- Remove the raw logical volumes (using the cl_rmlv command)
- Remove the added LUNs from the volume group (using the cl_reducevg command)
- Remove the disk devices on both client nodes: rmdev -dl hdiskX
- Remove LUN mappings from each VIOS (using the rmvdev command)
- Remove the LUNs from each VIOS (using the rmdev command)
PuTTY itself does not provide a means to export the list of sessions, nor a way to import the sessions from another computer. However, it is not so difficult, once you know that PuTTY stores the session information in the Windows Registry.
To export the PuTTY sessions, run:
regedit /e "%userprofile%\desktop\putty-sessions.reg" HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions
Or, to export all settings (and not only the sessions), run:
regedit /e "%userprofile%\desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham
This will create either a putty-sessions.reg or putty.reg file on your Windows desktop. You can transfer these files over to another computer, and after installing PuTTY on the other computer, simply double-click on the .reg file to have the Windows Registry entries added. Then, if you start up PuTTY, all the session information should be there.
This blog describes the steps required to identify an I/O problem in the storage area network and/or disk arrays on AIX.
Note: Do not execute filemon with AIX 6.1 Technology Level 6 Service Pack 1 if WebSphere MQ is running. WebSphere MQ will abnormally terminate with this AIX release.
Running filemon: As a rule of thumb, a write to a cached fiber attached disk array should average less than 2.5 ms and a read from a cached fiber attached disk array should average less than 15 ms. To confirm the responsiveness of the storage area network and disk array, filemon can be utilized. The following example will collect statistics for a 90 second interval.
# filemon -PT 268435184 -O pv,detailed -o /tmp/filemon.rpt; sleep 90; trcstop
Run trcstop command to signal end of trace.
Tue Sep 15 13:42:12 2015
System: AIX 6.1 Node: hostname Machine: 0000868CF300
[filemon: Reporting started]
#
[filemon: Reporting completed]
[filemon: 90.027 secs in measured interval]
Then, review the generated report (/tmp/filemon.rpt):
# more /tmp/filemon.rpt
...
------------------------------------------------------------------------
Detailed Physical Volume Stats (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/hdisk11 description: XP MPIO Disk P9500 (Fibre)
reads: 437296 (0 errs)
  read sizes (blks): avg 8.0 min 8 max 8 sdev 0.0
  read times (msec): avg 11.111 min 0.122 max 75.429 sdev 0.347
  read sequences: 1
  read seq. lengths: avg 3498368.0 min 3498368 max 3498368 sdev 0.0
seeks: 1 (0.0%)
  seek dist (blks): init 3067240
  seek dist (%tot blks): init 4.87525
time to next req(msec): avg 0.206 min 0.018 max 461.074 sdev 1.736
throughput: 19429.5 KB/sec
utilization: 0.77

VOLUME: /dev/hdisk12 description: XP MPIO Disk P9500 (Fibre)
writes: 434036 (0 errs)
  write sizes (blks): avg 8.1 min 8 max 56 sdev 1.4
  write times (msec): avg 2.222 min 0.159 max 79.639 sdev 0.915
  write sequences: 1
  write seq. lengths: avg 3498344.0 min 3498344 max 3498344 sdev 0.0
seeks: 1 (0.0%)
  seek dist (blks): init 3067216
  seek dist (%tot blks): init 4.87521
time to next req(msec): avg 0.206 min 0.005 max 536.330 sdev 1.875
throughput: 19429.3 KB/sec
utilization: 0.72
...
In the above report, hdisk11 was the busiest disk on the system during the 90 second sample. The reads from hdisk11 averaged 11.111 ms. Since this is less than 15 ms, the storage area network and disk array were performing within scope for reads.
Also, hdisk12 was the second busiest disk on the system during the 90 second sample. The writes to hdisk12 averaged 2.222 ms. Since this is less than 2.5 ms, the storage area network and disk array were performing within scope for writes.
Other methods to measure similar information:
You can also use the topas command with the -D option to get an overview of the busiest disks on the system:
# topas -D
In the output, the columns ART and AWT provide similar information. ART stands for the average time to receive a response from the hosting server for the read request sent, and AWT stands for the average time to receive a response from the hosting server for the write request sent.
You can also use the iostat command, using the -D (for drive utilization) and -l (for long listing mode) options:
# iostat -Dl 60
This will provide an overview of your disks over a 60 second period. The "avg serv" column under the read and write sections provides the average service times for reads and writes for each disk.
An occasional peak value recorded on a system doesn't immediately mean there is a disk bottleneck. It requires longer periods of monitoring to determine if a certain disk is indeed a bottleneck for your system.
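The rules of thumb quoted earlier (reads under 15 ms, writes under 2.5 ms against a cached fiber-attached array) can be applied to a filemon report automatically. The following is a sketch, with check_filemon as my own helper name; the field positions assume the report layout shown in this article, so adjust the awk if your filemon output differs.

```shell
# Sketch: scan a filemon detailed physical volume report and flag disks
# whose average read time exceeds 15 ms or average write time exceeds
# 2.5 ms. $5 is the value after "avg" on the times lines.
check_filemon() {
    awk '
        /^VOLUME:/              { vol = $2 }
        /read times \(msec\):/  { if ($5 + 0 > 15)  print vol " slow reads: "  $5 " ms" }
        /write times \(msec\):/ { if ($5 + 0 > 2.5) print vol " slow writes: " $5 " ms" }
    ' "$1"
}
```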


