There is a way to mount a share from a Windows system as a CIFS filesystem in AIX:
- Install the CIFS software on the AIX server (this is part of AIX itself: bos.cifs_fs).
- Create a folder on the Windows machine, e.g. D:\share.
- Create a local user, e.g. "share" (user IDs from Active Directory cannot be used): Settings -> Control Panel -> User Accounts -> Advanced tab -> Advanced button -> Select Users -> Right-click in the right window and select "New User" -> Enter the user name, enter the password twice, deselect "User must change password at next logon", then click Create, Close and OK.
- Make sure the folder on the D: drive (in this case "share") is shared, give the share a name (we'll use "share" again as the name in this example), and give "Full Control" permissions to "Everyone".
- Create a mountpoint on the AIX machine to mount the windows share on, e.g. /mnt/share.
- Type on the AIX server as user root:
# mount -v cifs -n hostname/share/password -o uid=201,fmode=750 /share /mnt/share
- You're done!
Permanently change the hostname for the inet0 device in the ODM by choosing one of the following:
Command line method:
# chdev -l inet0 -a hostname=[newhostname]
SMIT fastpath method:
# smitty hostname
Change the name of the node (which changes the name reported by uname) by choosing one of the following:
Command line method:
# uname -S [newhostname]
Or run the following script:
# /etc/rc.net
Change the hostname on the current running system:
# hostname [newhostname]
Change the /etc/hosts file to reflect the new hostname. Update the DNS name server, if applicable.
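The steps above can be combined into a small script. This is a dry-run sketch: it prints the commands instead of running them (remove the "echo"s to apply for real), and "aix02" is a hypothetical new hostname.

```shell
#!/bin/sh
# Dry-run sketch of a hostname change on AIX; the commands are echoed,
# not executed. Substitute your own hostname for the assumed "aix02".
NEW=aix02

# Basic sanity check: letters, digits, dots and hyphens only.
case $NEW in
  ""|*[!A-Za-z0-9.-]*) echo "invalid hostname: $NEW" >&2; exit 1 ;;
esac

echo chdev -l inet0 -a hostname=$NEW   # persist in the ODM
echo uname -S $NEW                     # change the node name
echo hostname $NEW                     # change it on the running system
```

Remember to update /etc/hosts (and DNS) afterwards as well.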
It may happen that a virtual terminal (vterm) from an HMC GUI only shows a black screen, even though the LPAR is running perfectly. Here's a solution to this problem:
- Login to the HMC using ssh as hscroot.
- Run lssyscfg -r sys to determine the machine name of your LPAR on the HMC.
- Run mkvterm -m [machine-name] -p [partition-name].
- You can end this session by typing "~." or "~~." (don't overlook the "dot" here!).
- Now go back to your HMC GUI via Web-based System Manager and start up a new vterm. It works again!
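If mkvterm complains that a terminal session is already open, the stale session can be closed first with rmvterm. The sequence below is an echoed (dry-run) sketch; the machine and partition names are placeholders.

```shell
#!/bin/sh
# Close a stale vterm session, then open a fresh one (commands are echoed
# as a sketch; MACHINE and PARTITION are assumed placeholder names).
MACHINE=Server-9117-570-SN10ABCDE
PARTITION=lpar01
echo rmvterm -m "$MACHINE" -p "$PARTITION"   # close any stale vterm session
echo mkvterm -m "$MACHINE" -p "$PARTITION"   # then open a new one
```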
Topics: AIX, Installation, System Admin
Installation history
A very easy way to see what was installed recently on your system:
# lslpp -h
It is very easy to clone your rootvg to another disk, for example for testing purposes: if you wish to install a piece of software without modifying the current rootvg, you can clone a rootvg disk to a new disk, start your system from that disk, and do the installation there. If it succeeds, you can keep using the new rootvg disk; if it doesn't, you can revert to the old rootvg disk, like nothing ever happened.
First, make sure every logical volume in the rootvg has a name that consists of 11 characters or less (if not, the alt_disk_copy command will fail).
To create a copy on hdisk1, type:
# alt_disk_copy -d hdisk1
If you now restart your system from hdisk1, you will notice that the original rootvg has been renamed to old_rootvg. To delete this volume group (in case you're satisfied with the new rootvg), type:
# alt_rootvg_op -X old_rootvg
A very good article about alternate disk installs can be found on developerWorks.
If you wish to copy a mirrored rootvg to two other disks, make sure to use quotes around the target disks, e.g. if you wish to create a copy on disks hdisk4 and hdisk5, run:
# alt_disk_copy -d "hdisk4 hdisk5"
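The 11-character check on logical volume names can be scripted as a pre-flight test. Below is a sketch: on a live system you would pipe in the real output of "lsvg -l rootvg"; here a sample of that output (with a made-up, too-long LV name) is used for illustration.

```shell
#!/bin/sh
# Pre-flight check for alt_disk_copy: print any rootvg logical volume
# whose name is longer than 11 characters (alt_disk_copy fails on these).
long_lv_names() {
  # Skip the first two lines ("rootvg:" and the column headers).
  awk 'NR>2 && length($1)>11 { print $1 }'
}

# Sample "lsvg -l rootvg" output; "averylonglvname01" is a hypothetical
# LV name that is too long.
lsvg_sample='rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       1       2       2    closed/syncd  N/A
averylonglvname01   jfs2       8       16      2    open/syncd    /data'

BAD=$(printf '%s\n' "$lsvg_sample" | long_lv_names)
[ -n "$BAD" ] && echo "rename these LVs before running alt_disk_copy: $BAD"
```

On a real system, replace the sample with: lsvg -l rootvg | long_lv_names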
The AIX kernel has an "enter_dbg" variable in it that can be set at the beginning of the boot processing which will cause all boot process output to be sent to the system console. In some cases, this data can be useful in debugging boot issues. The procedure for setting the boot debugger is as follows:
First: Preparing the system.
Set up KDB to present an initial debugger screen
# bosboot -ad /dev/ipldevice -I
Reboot the server:
# shutdown -Fr
Setting up for kernel boot trace:
When the debugger screen appears, set enter_dbg to the value we want to use:
************* Welcome to KDB *************
Call gimmeabreak...
Static breakpoint:
.gimmeabreak+000000 tweq r8,r8 r8=0000000A
.gimmeabreak+000004 blr
<.kdb_init+0002C0> r3=0
KDB(0)> mw enter_dbg
enter_dbg+000000: 00000000 = 42
xmdbg+000000: 00000000 = .
KDB(0)> g
Now, detailed boot output will be displayed on the console.
If your system completes booting, you will want to turn enter_dbg off:
************* Welcome to KDB *************
Call gimmeabreak...
Static breakpoint:
.gimmeabreak+000000 tweq r8,r8 r8=0000000A
.gimmeabreak+000004 blr
<.kdb_init+0002C0> r3=0
KDB(0)> mw enter_dbg
enter_dbg+000000: 00000042 = 0
xmdbg+000000: 00000000 = .
KDB(0)> g
When finished using the boot debugger, disable it by running:
# bosdebug -o
# bosboot -ad /dev/ipldevice
It is possible to stop and start an LPAR from the HMC prompt:
# lssyscfg -r lpar
This command will list all partitions known to this HMC.
# chsysstate -o osshutdown -r lpar -n [partition name]
This command will send a shutdown OS command to the LPAR.
# chsysstate -o on -r lpar -n [partition name]
This command will activate the partition.
# lsrefcode -r lpar -F lpar_name,refcode
This command will show the LED code.
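These commands combine naturally, for example to shut down every running partition in one go. The sketch below uses a sample of "lssyscfg -r lpar -F name,state" output in place of the real command, and echoes the chsysstate commands rather than executing them; the partition names are made up.

```shell
#!/bin/sh
# Sketch: send an OS shutdown to every running partition. SAMPLE stands in
# for real "lssyscfg -r lpar -F name,state" output; chsysstate is echoed,
# not executed.
SAMPLE='lpar01,Running
lpar02,Not Activated
lpar03,Running'

CMDS=$(printf '%s\n' "$SAMPLE" | while IFS=, read -r name state; do
  [ "$state" = "Running" ] || continue
  echo chsysstate -o osshutdown -r lpar -n "$name"
done)
printf '%s\n' "$CMDS"
```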
An adapter that has previously been added to an LPAR and now needs to be removed usually can't be removed from the LPAR, because it is in use by the LPAR. Here's how you find and remove the involved devices on the LPAR:
First, run:
# lsslot -c pci
This will find the adapter involved.
Then, find the parent device of the slot, by running:
# lsdev -Cl [adapter] -F parent
(Fill in the correct adapter, e.g. fcs0.)
Now, remove the parent device and all its children:
# rmdev -Rl [parentdevice] -d
For example:
# rmdev -Rl pci8 -d
Now you should be able to remove the adapter from the LPAR via the HMC.
If you need to replace the adapter because it is broken, then you need to power down the PCI slot in which the adapter is placed:
After issuing the "rmdev" command, run diag and go into "Task Selection", "Hot Plug Task", "PCI Hot Plug Manager", "Replace/Remove a PCI Hot Plug Adapter". Select the adapter and choose "remove".
After the adapter has been replaced (usually by an IBM technician), run cfgmgr again to make the adapter known to the LPAR.
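The removal sequence above can be summarized as a dry-run sketch for a hypothetical fcs0 adapter; the commands are echoed so you can review them, and "pci8" stands in for whatever parent device lsdev actually reports.

```shell
#!/bin/sh
# Echoed (dry-run) sketch of removing an adapter and its parent device;
# ADAPTER and PARENT are assumed example values.
ADAPTER=fcs0
echo lsslot -c pci                    # locate the slot holding the adapter
echo lsdev -Cl $ADAPTER -F parent     # reports the parent device, e.g. pci8
PARENT=pci8                           # substitute the value reported above
echo rmdev -Rl $PARENT -d             # remove the parent and all children
echo cfgmgr                           # after replacement: rediscover devices
```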
Topics: AIX, Networking, System Admin
SCP Stalls
When you encounter an issue where ssh through a firewall works perfectly, but scp of large files (for example mksysb images) stalls, then there's a solution to this problem: add "-l 8192" to the scp command.
The reason scp stalls is that it greedily grabs as much network bandwidth as possible when it transfers files; any delay caused by the network switch or the firewall can easily stall the TCP connection.
Adding the option "-l 8192" limits the scp session bandwidth to 8192 Kbit/second, which seems to work safely and fast enough (about 1 MB/second):
# scp -l 8192 SOURCE DESTINATION
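The -l value is given in Kbit/s, so a quick conversion shows why 8192 comes out at roughly 1 MB/s:

```shell
#!/bin/sh
# Convert scp's -l bandwidth limit (Kbit/s) to KB/s: 8 Kbit = 1 KB.
LIMIT_KBIT=8192
LIMIT_KB=$((LIMIT_KBIT / 8))
echo "$LIMIT_KBIT Kbit/s = $LIMIT_KB KB/s (about 1 MB/s)"
echo scp -l $LIMIT_KBIT SOURCE DESTINATION
```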
The "Integrated Virtual Ethernet" or IVE adapter is an adapter directly on the GX+ bus, and thus up to 3 times faster than a regular PCI card. You can order Power6 frames with different kinds of IVE adapters, with ports up to 10 Gb.
The IVE adapter acts as a layer-2 switch. You can create port groups. In each port group, up to 16 logical ports can be defined. Every port group requires at least 1 physical port (but 2 is also possible). Each logical port can have a MAC address assigned. These MAC addresses are located in the VPD chip of the IVE. When you replace an IVE adapter, LPARs will get new MAC addresses.
Each LPAR can only use 1 logical port per physical port. Different LPARs that use logical ports from the same port group can communicate without any external hardware needed, and thus communicate very fast.
The IVE is not hot-swappable. It may only be replaced by certified IBM service personnel.
First you need to configure an HEA (Host Ethernet Adapter); not in promiscuous mode, because that mode is meant to be used if you wish to assign a physical port dedicated to an LPAR. After that, you need to assign an LHEA (Logical Host Ethernet Adapter) to an LPAR. The HEA needs to be configured, and the frame needs to be restarted, in order to function correctly (because of the setting of multi-core scaling on the HEA itself).
So, to conclude: You can assign physical ports of the IVE adapter to separate LPARs (promiscuous mode). If you have an IVE with two ports, up to two LPARs can use these ports. But you can also configure it as an HEA and have up to 16 LPARs per physical port in a port group using the same interface (10 Gb ports are recommended). There are different kinds of IVE adapters; some allow you to create more port groups and thus more network connectivity. The IVE is a method of virtualizing Ethernet without the need for VIOS.


