There are two commands that can be used to add a route on an AIX system.
The first one is route, which can be used to temporarily add a route to an AIX system. Meaning: if the system is rebooted after the route has been added, the route will be lost again after the reboot.
The second command is chdev -l inet0, which can be used to permanently add a route on an AIX system. When this command is used, the route will persist across reboots, because this command writes the information of the route into the ODM of AIX.
Let's say you need to add a route on a system to network 10.0.0.0, and that network uses a netmask of 255.255.255.0 (or "24" in short mask notation). Finally, the gateway that can be used to access this network is 192.168.0.1. Obviously, adjust this to your own situation.
To temporarily add a route on a system for this network, use the following route command:

# route add -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1

After running this command, you can use the netstat -nr command to confirm that the route has indeed been set up:

# netstat -nr | grep 192.168.0.1
10/24              192.168.0.1       UG       0       0     en1      -      -

To remove that route again, simply change the route command from "add" to "delete":

# route delete -net 10.0.0.0 -netmask 255.255.255.0 192.168.0.1

Again, confirm with the netstat -nr command that the route has indeed been removed.
Now, as mentioned earlier, the route command will only temporarily (until the next reboot) add a route on the AIX system. To make things permanent, use the chdev command. This command takes the following form:
chdev -l inet0 -a route=net,-netmask,[your-netmask-goes-here],-static,[your-network-address-goes-here],[your-gateway-goes-here]
For example:

# chdev -l inet0 -a route=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed

This time, again, you can confirm with the netstat -nr command that the route has been set up. But now you can also confirm that the route has been added to the ODM, by using this command:

# lsattr -El inet0 -a route | grep 192.168.0.1
route net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1 Route True

At this point, you can reboot the system, and you'll notice that the route is still there, by repeating the netstat -nr and lsattr -El inet0 commands.

To remove this permanent route from the AIX system, simply change the chdev command above from "route" to "delroute":

# chdev -l inet0 -a delroute=net,-netmask,255.255.255.0,-static,10.0.0.0,192.168.0.1
inet0 changed

Finally, again confirm using the netstat -nr and lsattr -El inet0 commands that the route has indeed been removed.
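As a side note, the route command accepts the same syntax for a default route. A minimal sketch, assuming the same gateway of 192.168.0.1:

# route add default 192.168.0.1
# route delete default 192.168.0.1

A default gateway can be made persistent in the ODM as well; the easiest way to do that is usually through SMIT (for example via smitty route), rather than crafting the chdev route attribute value by hand.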
A Red Hat Enterprise Linux system should have a single default gateway defined. However, it does sometimes occur that a system has multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:
First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0    192.168.0.1    0.0.0.0    UG    0 0 0    em1
0.0.0.0    192.168.1.1    0.0.0.0    UG    0 0 0    em2

In the example above, there are two default gateway entries, one to 192.168.0.1, and another one to 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present, and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:

# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1

On a system with multiple network interfaces, it is best to define the default gateway in the file /etc/sysconfig/network instead. This file is the global network configuration file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:

GATEWAY=192.168.0.1
GATEWAYDEV=em1

Next, remove any GATEWAY entries from the ifcfg-* files in /etc/sysconfig/network-scripts.

Finally, restart the network service:

# service network restart

This should resolve the multiple default gateways, and the output of the netstat command should now only show one single entry starting with 0.0.0.0.
Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
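If you need to drop one of the extra default routes immediately, without restarting the network service, the iproute2 tooling can do that too. A sketch, assuming the unwanted gateway is 192.168.1.1 on interface em2:

# ip route del default via 192.168.1.1 dev em2

Note that this change is not persistent; any GATEWAY entries left in the ifcfg-* files will still determine what comes back after the next network restart or reboot.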
Topics: Networking, System Admin
Ping tricks
A few tricks for the ping command to thoroughly test your network connectivity and check how much time a ping request takes:
Decrease the interval between ping requests from the default of 1 second, using the -i option, so that you send, for example, 10 ping requests per second. As a test, to ping 192.168.0.1 ten times a second, run:

# ping -i .1 192.168.0.1

You can also go up to 1/100th of a second:

# ping -i .01 192.168.0.1

To increase the default packet size of 64 bytes, use the -s option. For example, to send 1 KB with every ping request, run:

# ping -s 1024 192.168.0.1

Or combine the -i and -s options:

# ping -s 1024 -i .01 192.168.0.1
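To turn this into a quick latency test with summary statistics only, these options can be combined with -c (packet count) and -q (quiet output), which are available on most ping implementations. For example, 100 small and 100 large pings:

# ping -q -c 100 -i .01 192.168.0.1
# ping -q -c 100 -i .01 -s 1024 192.168.0.1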
This is a quick NFS configuration on RHEL, without too many concerns about security, fine tuning, or access control. In our scenario, there are two hosts:
- NFS Server, IP 10.1.1.100
- NFS Client, IP 10.1.1.101
On the NFS server, run the below command to begin the NFS server installation:

[nfs-server] # yum install nfs-utils rpcbind

Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:

[nfs-server] # mkdir -p /opt/nfs

Edit the /etc/exports file (which is the NFS exports file) to add the below line, exporting folder /opt/nfs to client 10.1.1.101:

/opt/nfs 10.1.1.101(no_root_squash,rw)

Next, make sure to open port 2049 on your firewall to allow client requests:

[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload

Start the rpcbind and NFS server daemons in this order:

[nfs-server] # service rpcbind start; service nfs start

Check the NFS server status:
[nfs-server] # service nfs status
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled;
vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
order-with-mounts.conf
Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
Main PID: 2883 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
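Optionally, you can also verify that the NFS-related RPC services have registered themselves with rpcbind (the exact list of registered programs may differ per NFS version):

[nfs-server] # rpcinfo -p | egrep 'nfs|mountd'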
Next, export all the file systems configured in /etc/exports:

[nfs-server] # exportfs -rav

And check the currently exported file systems:

[nfs-server] # exportfs -v

Next, continue with the NFS client.

Install the required packages:

[nfs-client] # yum install nfs-utils rpcbind
[nfs-client] # service rpcbind start

Create a mount point directory on the client, for example /mnt/nfs:

[nfs-client] # mkdir -p /mnt/nfs

Discover the NFS exported file systems:

[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101

Mount the previously exported NFS directory /opt/nfs:

[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs

Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:

[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile

Move to the server side and check if the testfile file exists:

[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile

At this point it is working, but it is not set up to remain there permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it remains working even after a reboot, perform the following steps:
On the NFS server side, to have the NFS server service enabled at system boot time, run:

[nfs-server] # systemctl enable nfs-server

On the NFS client side, add an entry to the /etc/fstab file, which will ensure the NFS file system is mounted at boot time:

10.1.1.100:/opt/nfs    /mnt/nfs    nfs4    soft,intr,nosuid    0 0

The options for the NFS file system are as follows:
- soft = No hard mounting; avoids hanging file access commands on the NFS client if the NFS server is unavailable.
- intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
- nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
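To verify the new /etc/fstab entry without rebooting the client, you can unmount the file system and mount it again by its mount point, which forces mount to re-read /etc/fstab:

[nfs-client] # umount /mnt/nfs
[nfs-client] # mount /mnt/nfs
[nfs-client] # df -h /mnt/nfs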
To check which NFS clients are currently connected, run the following command on the NFS server:

[nfs-server] # netstat -an | grep 10.1.1.100:2049

This will tell you the established connections for each of the clients, for example:

tcp    0    0 10.1.1.100:2049    10.1.1.101:757    ESTABLISHED

In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).
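Another way to see which clients have the export mounted is to run showmount with the -a option on the NFS server; it lists client:directory pairs as recorded by the mount daemon (this information may not always be completely up to date):

[nfs-server] # showmount -a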
Iperf is a command-line tool that can be used to diagnose network speed related issues, or just simply determine the available network throughput.
Iperf measures the maximum network throughput a server can handle. It is particularly useful when experiencing network speed issues, as you can use Iperf to determine what the maximum throughput is for a server.
First, you'll need to install iperf.
For AIX:
Iperf is available from http://www.perzl.org/aix/index.php?n=Main.iperf. Download the RPM file, for example iperf-2.0.9-1.aix5.1.ppc.rpm to your AIX system. Next install it:
# rpm -ihv iperf-2.0.9-1.aix5.1.ppc.rpm

For Red Hat Enterprise Linux:
You'll first need to install EPEL, as Iperf is not available in the standard Red Hat repositories. For example for Red Hat 7 systems:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Next, you'll have to install Iperf itself:

# yum -y install iperf

Now that you have Iperf installed, you can start testing the connection between two servers. So, you'll need to have at least two servers with Iperf installed.
On the server you wish to test, launch Iperf in server mode:
# iperf -s

That will put the server in listening mode, and besides that, nothing happens. The output will look something like this:

# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 198.51.100.5 port 5001 connected with 198.51.100.6 port 59700
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec

On the other server, connect to the first server. For example, if your first server is at IP address 198.51.100.5, run:

# iperf -c 198.51.100.5

After about 10 seconds, you'll see output on your screen showing the amount of data transferred, and the available bandwidth. The output may look something like this:

# iperf -c 198.51.100.5
------------------------------------------------------------
Client connecting to 198.51.100.5, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 198.51.100.6 port 59700 connected with 198.51.100.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.76 GBytes  8.38 Gbits/sec

You can run multiple tests while the server Iperf process is listening on the first server. When you've completed your test, you can CTRL-C the running server Iperf command.
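If a single TCP stream does not saturate the link, you can also run several client streams in parallel and extend the test duration; the -P option sets the number of parallel streams and -t the test length in seconds. For example, using the same server address:

# iperf -c 198.51.100.5 -P 4 -t 30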
For more information, see the official Iperf site at iperf.fr.
Topics: AIX, Monitoring, Networking, Red Hat / Linux, Security, System Admin
Determining type of system remotely
If you run into a system that you can't access, but that is available on the network, and you have no idea what type of system it is, then there are a few tricks you can use to determine the type of system remotely.
The first one is by looking at the TTL (Time To Live) when pinging the system's IP address. For example, a ping to an AIX system may look like this:

# ping 10.11.12.82
PING 10.11.12.82 (10.11.12.82) 56(84) bytes of data.
64 bytes from 10.11.12.82 (10.11.12.82): icmp_seq=1 ttl=253 time=0.394 ms
...

TTL (Time To Live) is a value included in packets sent over networks that tells the recipient how long to hold or use the packet before discarding and expiring the data. Default TTL values differ between operating systems, so you can often determine the OS based on the TTL value. A detailed list of operating systems and their default TTL values can be found online. Basically, a UNIX/Linux system has a TTL of 64, Windows uses 128, and AIX/Solaris uses 254.

Now, in the example above, you can see "ttl=253". It's still an AIX system, but there's most likely a router in between, decreasing the TTL by one.
Another good method is by using nmap. The nmap utility has a -O option that allows for OS detection:

# nmap -O -v 10.11.12.82 | grep OS
Initiating OS detection (try #1) against 10.11.12.82 (10.11.12.82)
OS details: IBM AIX 5.3
OS detection performed.

Okay, so it isn't a perfect method either. We ran the nmap command above against an AIX 7.1 system, and it came back as AIX 5.3 instead. And sometimes, you'll have to run nmap a couple of times before it successfully discovers the OS type. But still, we now know it's an AIX system behind that IP.
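Besides -O, nmap's service and version detection can also provide hints about the OS, for example by looking at the SSH banner of the target (assuming port 22 is open):

# nmap -sV -p 22 10.11.12.82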
Another option you may use is to query SNMP information. If the device is SNMP enabled (it is running an SNMP daemon and it allows you to query SNMP information), then you may be able to run a command like this:

# snmpinfo -h 10.11.12.82 -m get -v sysDescr.0
sysDescr.0 = "IBM PowerPC CHRP Computer Machine Type: 0x0800004c Processor id: 0000962CG400 Base Operating System Runtime AIX version: 06.01.0008.0015 TCP/IP Client Support version: 06.01.0008.0015"

By the way, the SNMP example above is exactly why UNIX Health Check generally recommends disabling SNMP, or at least disallowing such system information from being provided through SNMP by updating the /etc/snmpdv3.conf file appropriately, because this information can be really useful to hackers. On the other hand, your organization may use monitoring that relies on SNMP, in which case it needs to be enabled. But then you still have the opportunity of changing the SNMP community name to something else (the default is "public"), which also limits the remote information gathering possibilities.
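The snmpinfo command shown above is AIX-specific. When querying from a Linux system with the net-snmp utilities installed, an equivalent query (assuming SNMP version 2c and the default community name "public") would look something like this:

# snmpget -v 2c -c public 10.11.12.82 sysDescr.0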
As an AIX admin, you may not always know what switches a certain server is connected to. If you have Cisco switches, here's an interesting method to identify the switch your server is connected to.
First, run ifconfig to look up the interfaces that are in use:

# ifconfig -a | grep en | grep UP | cut -f1 -d:
en0
en4
en8

Okay, so on this system, you have interfaces en0, en4 and en8 active. So, if you want to determine the switch en4 is connected to, run this command:

# tcpdump -nn -v -i en4 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on en4, link-type 1, capture size 1500 bytes

After a while, it will display the following information:
11:40:14.176810 CDP v2, ttl: 180s, checksum: 692 (unverified)
Device-ID (0x01), length: 22 bytes: 'switch1.host.com'
Version String (0x05), length: 263 bytes:
Cisco IOS Software, Catalyst 4500 L3 Switch Software
(cat4500e-IPBASEK9-M), Version 12.2(52)XO, RELEASE SOFTWARE
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2009 by Cisco Systems, Inc.
Compiled Sun 17-May-09 18:51 by prod_rel_team
Platform (0x06), length: 16 bytes: 'cisco WS-C4506-E'
Address (0x02), length: 13 bytes: IPv4 (1) 111.22.33.44
Port-ID (0x03), length: 18 bytes: 'GigabitEthernet2/7'
Capability (0x04), length: 4 bytes: (0x00000029):
Router, L2 Switch, IGMP snooping
VTP Management Domain (0x09), length: 2 bytes: ''''
Native VLAN ID (0x0a), length: 2 bytes: 970
Duplex (0x0b), length: 1 byte: full
Management Addresses (0x16), length: 13 bytes: IPv4 (1)
111.22.33.44
unknown field type (0x1a), length: 12 bytes:
0x0000: 0000 0001 0000 0000 ffff ffff
47 packets received by filter
0 packets dropped by kernel
Note here that this will only work on Cisco switches, as it uses the Cisco Discovery Protocol (CDP).
The output above will help you determine that en4 is connected to a network switch called 'switch1.host.com', with IP address '111.22.33.44', and that it is connected to port 'GigabitEthernet2/7' (most likely port 7 on blade 2 of this switch).
If you're running the same command on an Etherchannelled interface, keep in mind that it will only display the information of the active interface in the Etherchannel configuration. You may have to fail over the Etherchannel to a backup adapter, to determine the switch information for the backup adapter in the Etherchannel configuration.
If your LPAR has virtual Ethernet adapters, this will not work (the command will just hang). Instead, run the command on the VIOS.
Also note that you may need to run the command a couple of times, for tcpdump to discover the necessary information.
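If your switches advertise LLDP instead of CDP, a similar capture may work; LLDP frames use Ethernet type 0x88cc, so the filter changes accordingly (whether anything shows up depends on what the switch is configured to send):

# tcpdump -nn -v -i en4 -s 1500 -c 1 'ether proto 0x88cc'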
Another interesting way to use tcpdump is to discover what VLAN a network interface is connected to. For example, you may have two interfaces on an AIX system that you want to configure in an Etherchannel, or you may want to use one of them as a production interface and the other as a standby interface. In that case, it is important to know that both interfaces are within the same VLAN. Obviously, you can ask your network team to validate this, but it is also good to be able to validate it on the host side. Also, you could just configure an IP address on the interface and see if it works, but for production systems that may not always be possible.
The trick basically is to run tcpdump on an interface, and check what network traffic can be discovered. For example, if you have two network interfaces, like these:

# netstat -ni | grep en[0,1]
en0 1500 link#2    0.21.5e.c0.d0.12 1426632806 0 86513680 0 0
en0 1500 10.27.18  10.27.18.64      1426632806 0 86513680 0 0
en1 1500 link#3    0.21.5e.c0.d0.13 20198022   0 7426576  0 0
en1 1500 10.27.130 10.27.130.10     20198022   0 7426576  0 0

In this case, interface en0 uses IP address 10.27.18.64, and is within the 10.27.18.x subnet. Interface en1 uses IP address 10.27.130.10, and is within the 10.27.130.x subnet (assuming both interfaces use a subnet mask of 255.255.255.0).

Now, if en0 is a production interface, and you would like to confirm that en1, the standby interface, can be used to fail over the production interface to, then you need to know that both interfaces are within the same VLAN. To determine that, run tcpdump on en1, and check if any network traffic in the 10.27.18 subnet (used by en0) can be seen (press CTRL-C after seeing any such network traffic, to cancel the tcpdump command):

# tcpdump -i en1 -qn net 10.27.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en1, link-type 1, capture size 96 bytes
07:27:25.842887 ARP, Request who-has 10.27.18.136 (ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.846134 ARP, Request who-has 10.27.18.135 (ff:ff:ff:ff:ff:ff) tell 10.27.18.2, length 46
07:27:25.917068 IP 10.27.18.2.1985 > 224.0.0.2.1985: UDP, length 20
07:27:25.931376 IP 10.27.18.3.1985 > 224.0.0.2.1985: UDP, length 20
^C
24 packets received by filter
0 packets dropped by kernel

After seeing this, you know for sure that on interface en1, even though it has an IP address in subnet 10.27.130.x, network traffic for the 10.27.18.x subnet can be seen, and thus that failing over the production interface IP address from en0 to en1 should work just fine.
Topics: AIX, Networking, System Admin
Using iptrace
The iptrace command can be very useful to find out what network traffic flows to and from an AIX system.
You can use any combination of these options, but you do not need to use them all:
- -a Do NOT print out ARP packets.
- -s [source IP] Limit trace to source/client IP address, if known.
- -d [destination IP] Limit trace to destination IP, if known.
- -b Capture bidirectional network traffic (send and receive packets).
- -p [port] Specify the port to be traced.
- -i [interface] Only trace for network traffic on a specific interface.
Run iptrace on AIX interface en1 to capture port 80 traffic to file trace.out from a single client IP to a server IP:
# iptrace -a -i en1 -s clientip -b -d serverip -p 80 trace.out

This trace will capture both directions of the port 80 traffic on interface en1 between the clientip and serverip, and writes this to the raw file trace.out.

To stop the trace, find the iptrace process and kill it:

# ps -ef | grep iptrace
# kill <PID>

The ipreport command can be used to transform the trace file generated by iptrace into a human-readable format:
# ipreport trace.out > trace.report
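On most AIX levels, iptrace is also defined as an SRC subsystem, which avoids having to look up and kill the process by hand. A sketch, using the same options and writing to /tmp/trace.out:

# startsrc -s iptrace -a "-a -i en1 -s clientip -b -d serverip -p 80 /tmp/trace.out"
# stopsrc -s iptrace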
Topics: AIX, Networking, System Admin
IP alias
To configure IP aliases on AIX:
Use the ifconfig command to create an IP alias. To have the alias created when the system starts, add the ifconfig command to the /etc/rc.net script.
The following example creates an alias on the en1 network interface. The alias must be defined on the same subnet as the network interface.
# ifconfig en1 alias 9.37.207.29 netmask 255.255.255.0 up

The following example deletes the alias:
# ifconfig en1 delete 9.37.207.29
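Because ifconfig changes do not survive a reboot, an alternative to editing /etc/rc.net is to store the alias in the ODM with chdev, using the alias4 attribute of the interface (available on recent AIX levels). A sketch for the same example, including the matching removal:

# chdev -l en1 -a alias4=9.37.207.29,255.255.255.0
# chdev -l en1 -a delalias4=9.37.207.29,255.255.255.0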
Red Hat Linux provides the following tools to make changes to the network configuration, such as adding a new card, assigning an IP address, changing the DNS server, etcetera:
- GUI tool (X windows required) - system-config-network
- Command line text based GUI tool (No X windows required) - system-config-network-tui
- Edit configuration files directly, stored in /etc/sysconfig/network-scripts directory
Editing the configuration files stored in /etc/sysconfig/network-scripts:
First change directory to /etc/sysconfig/network-scripts/:
# cd /etc/sysconfig/network-scripts/

You need to edit / create files as follows:
- /etc/sysconfig/network-scripts/ifcfg-eth0 : First Ethernet card configuration file
- /etc/sysconfig/network-scripts/ifcfg-eth1 : Second Ethernet card configuration file
# vi ifcfg-eth0

Append/modify as follows:

# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=static
DHCPCLASS=
HWADDR=00:30:48:56:A6:2E
IPADDR=10.251.17.204
NETMASK=255.255.255.0
ONBOOT=yes

Save and close the file. Define the default gateway (router IP) and hostname in the /etc/sysconfig/network file:

# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=host.domain.com
GATEWAY=10.251.17.1

Save and close the file. Restart networking:

# /etc/init.d/network restart

Make sure you have the correct DNS server defined in the /etc/resolv.conf file. Try to ping the gateway, and other hosts on your network. Also check if you can resolve host names:

# nslookup host.domain.com

And verify that the NTP servers are correct in /etc/ntp.conf, and that you can connect to the time server, by running the ntpdate command against one of the NTP servers:

# ntpdate 10.20.30.40

This should synchronize system time with time server 10.20.30.40.
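For reference, a minimal /etc/resolv.conf for the example network above might look like this (the name server addresses below are placeholders; substitute your own DNS servers):

search domain.com
nameserver 10.251.17.10
nameserver 10.251.17.11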


