Tech Blog

These are blog entries written by the UNIX Health Check development team. Our team has extensive technical experience on both AIX and Red Hat systems, and we like to share our knowledge with our visitors.

Topics: Red Hat / Linux, Storage

RHEL 7: Set up storage multi-pathing

This next piece describes how to configure the storage multi-pathing software on a Red Hat Enterprise Linux 7 system. This software is required if you're using SAN storage and multiple paths are available to the storage (which is usually the case).

First, check if all required software is installed. It generally is, but it's good to check:

# yum -y install device-mapper-multipath
Next, check if the multipath daemon is running:
# service multipathd status
If it is, stop it:
# service multipathd stop
Configure file /etc/multipath.conf, which is the configuration file for the multipath daemon:
# mpathconf --enable --with_multipathd y
This will create a default /etc/multipath.conf file, which often works well without any further configuration.

Then start the multipath daemon:
# service multipathd start
Redirecting to /bin/systemctl start multipathd.service
You can now use the lsblk command to view the disks that are configured on the system.
# lsblk
This command should show that there have been mpathX devices created, which are the multipath devices managed by the multipath daemon, and you can now start using these mpathX disk devices as storage on the Red Hat system. Another way to check the mpath disk devices available on the system, is by looking at the /dev/mapper directory:
# ls -als /dev/mapper/mpath*
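If you want just the list of mpath device names, a little awk over the lsblk output will do. The sample output below is illustrative (device names and sizes are made up):

```shell
# Illustrative lsblk output on a multipathed system (made-up values):
lsblk_out='sda           8:0    0  100G  0 disk
└─mpatha    253:0    0  100G  0 mpath
sdb           8:16   0  100G  0 disk
└─mpatha    253:0    0  100G  0 mpath'

# Print each unique mpath device name (the TYPE column is "mpath"):
echo "$lsblk_out" | awk '$NF == "mpath" {sub(/^.*─/, "", $1); print $1}' | sort -u
```

Here both sda and sdb are paths to the same multipath device, so only one name (mpatha) is printed.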
If you have a clustered environment, where SAN storage devices are zoned and allocated to multiple systems, you may want to ensure that all the nodes in the cluster are using the same naming for the mpathX devices. That makes it easier to recognize which disk is which on each system.

To ensure that all the nodes in the cluster use the same naming, first run a "cat /etc/multipath/bindings" command on all nodes, and identify which disks are shared on all nodes, and what the current naming of the mpathX devices on each system looks like. It may well be that the naming of the mpathX devices is already consistent on all cluster nodes.

If it is not, however, then copy file /etc/multipath/bindings from one server to all other cluster nodes. Be careful when doing this, especially when one or more servers in a cluster have more SAN storage allocated than others. Be sure that only those entries in /etc/multipath/bindings are copied over to all cluster nodes, where the entries represent shared storage on all cluster nodes. Any SAN storage allocated to just one server will show up in the /etc/multipath/bindings file for that server only, and it should not be copied over to other servers.
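The selection of shared entries can be sketched with awk. In this hypothetical example, bindings.node1 and bindings.node2 stand in for copies of /etc/multipath/bindings from two cluster nodes (the WWIDs are made up), and only the entries whose WWIDs appear on both nodes are kept:

```shell
# Hypothetical bindings files copied from two cluster nodes:
cat > bindings.node1 <<'EOF'
mpatha 36000d31000111111111111111111111a
mpathb 36000d31000111111111111111111111b
mpathc 36000d31000111111111111111111111c
EOF
cat > bindings.node2 <<'EOF'
mpathx 36000d31000111111111111111111111a
mpathy 36000d31000111111111111111111111b
EOF

# Keep only node1 entries whose WWID (field 2) also exists on node2:
awk 'NR==FNR {seen[$2]=1; next} seen[$2]' bindings.node2 bindings.node1 > bindings.shared
cat bindings.shared
```

The resulting bindings.shared file (mpatha and mpathb only; mpathc is local to node1) is what you would review and then distribute to the other cluster nodes.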

Once the file is correct on all cluster nodes, restart the multipath daemon on each cluster node:
# service multipathd stop
# multipath -F
# service multipathd start
If you now do a "ls" in /dev/mapper on each cluster node, you'll see the same mpath names on all cluster nodes.

Once this is complete, make sure that the multipath daemon is started at system boot time as well:
# systemctl enable multipathd

Topics: Red Hat / Linux, System Admin

Subscribing a Red Hat system

Here's how to register and un-register a Red Hat system through subscription-manager. You'll need to do this, for example, if you wish to do operating system updates on a Red Hat system.

First, here's how to unregister a system. This might come in handy if you do not have enough subscriptions in your Red Hat account, and temporarily want to move a valid subscription over to another system:

# subscription-manager unregister
System has been unregistered.
And here's how you register:
# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: [type your Red Hat username here]
Password: [type your Red Hat password here]
The system has been registered with ID: 3db39bee-bd48-46e8-9abc-9ba9
If you have issues registering a server, try removing all Red Hat subscription information first, and then register again, using the "auto-attach" option:
# subscription-manager clean
All local data removed
# subscription-manager list

+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.4
Arch:           x86_64
Status:         Unknown
Status Details:
Starts:
Ends:

# subscription-manager register --auto-attach
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: [type your Red Hat username here]
Password: [type your Red Hat password here]
The system has been registered with ID: 3db39bee-bd48-46e8-9abc-9ba9

Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status:       Subscribed

# subscription-manager list

+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.4
Arch:           x86_64
Status:         Subscribed
Status Details:
Starts:         12/27/2017
Ends:           12/26/2020
If you wish to use a specific Red Hat subscription, then you may first check for the available Red Hat subscriptions, by running:
# subscription-manager list --available --all
In the output of the command above, you will see, if any subscriptions are available, a Pool ID. You can use that Pool ID to attach a specific subscription to the system, for example, by running:
# subscription-manager attach --pool=8a85f98c6267d2d90162734a700467b2

Topics: Networking, Red Hat / Linux

Setting up a bonded network interface on RHEL 7

The following procedure describes how to set up a bonded network interface on Red Hat Enterprise Linux. It assumes that you already have a working single network interface, and now wish to move the system to a bonded network interface set-up to allow for network redundancy, for example by connecting two separate network interfaces, preferably on two different network cards in the server, to two different network switches. This provides redundancy both if a network card in the server fails and if a network switch fails.
First, log in as user root on the console of the server. We are going to change the current network configuration to a bonded network configuration, and while doing so, the system will temporarily lose network connectivity, so it is best to work from the console.

In this procedure, we'll be using network interfaces em1 and p3p1, on two different cards, to get card redundancy (just in case one of the network cards fails).

Let's assume that IP address 172.29.126.213 is currently configured on network interface em1. You can verify that, by running:

# ip a s
Also, we'll need to verify, using the ethtool command, that there is indeed a good link status on both the em1 and p3p1 network interfaces:
# ethtool em1
# ethtool p3p1
To list the bonding module info, run (the module should be enabled by default, so this is just to verify):
# modinfo bonding
Create copies of the current network files, just for safe-keeping:
# cd /etc/sysconfig/network-scripts
# cp ifcfg-em1 /tmp
# cp ifcfg-p3p1 /tmp
Now, create a new file ifcfg-bond0 in /etc/sysconfig/network-scripts. We'll configure the IP address of the system (the one that was configured previously on network interface em1) on a new bonded network interface, called bond0. Make sure to update the file with the correct IP address, gateway and network mask for your environment:
# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.29.126.213
NETMASK=255.255.255.0
GATEWAY=172.29.126.1
BONDING_OPTS="mode=5 miimon=100"
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
The next thing to do is to create two more files, one for each network interface that will be a slave of the bonded network interface. In our example, those are em1 and p3p1.

Create file /etc/sysconfig/network-scripts/ifcfg-em1 (be sure to adapt the file to your environment, for example by using the correct UUID; you may find that in the copies you've made of the previous network interface files). In this file, you'll also specify that the bond0 interface is now the master.
# cat ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
NAME=em1
UUID=cab24cdf-793e-4aa7-a093-50bf013910db
DEVICE=em1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Create file ifcfg-p3p1:
# cat ifcfg-p3p1
TYPE=Ethernet
BOOTPROTO=none
NAME=p3p1
UUID=5017c829-2a57-4626-8c0b-65e807326dc0
DEVICE=p3p1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
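Since the two slave files only differ in the interface name, they can also be generated in a loop. This sketch writes to the current directory (on a real system you would work in /etc/sysconfig/network-scripts and re-add the UUID= lines from your backup copies):

```shell
# Sketch: generate both slave ifcfg files in one loop. The interface
# names match this article's example; adjust them for your hardware.
for IF in em1 p3p1; do
  cat > "ifcfg-$IF" <<EOF
TYPE=Ethernet
BOOTPROTO=none
NAME=$IF
DEVICE=$IF
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
done
```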
Now, we're ready to start using the new bonded network interface. Restart the network service:
# systemctl restart network.service
Run the ip command to check the current network config:
# ip a s
The IP address should now be configured on the bond0 interface.

Ping the default gateway, to test if your bonded network interface can reach the switch. In our example, the default gateway is set to 172.29.126.1:
# ping 172.29.126.1
This should work. If not, re-trace the steps you've done so far, or work with your network team to identify the issue.

Check that both interfaces of the bonded interface are up, and what the current active network interface is. You can do this by looking at file /proc/net/bonding/bond0. In this file you can see what the currently active slave is, and if all slaves of the bonded network interface are up. For example:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: p3p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p3p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:ce:26:30
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
In the example above, the active network interface is p3p1. Let's bring it down, to see if it fails over to network interface em1. You can bring down a network interface using the ifdown command:
# ifdown p3p1
Device 'p3p1' successfully disconnected.
Again, look at the /proc/net/bonding/bond0 file. You can now see that the active network interface has changed to em1, and that network interface p3p1 is no longer listed in the file (because it is down):
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0
Now ping the default gateway again, and make sure it still works (now that we're using network interface em1 instead of network interface p3p1).

Then bring the p3p1 interface back up, using the ifup command:
# ifup p3p1
And check the bonding status again:
# cat /proc/net/bonding/bond0
It should show that the active network interface is still em1; it will not fail back to network interface p3p1 (after all, why would it? Network interface em1 works just fine).

Now repeat the same test, by bringing down network interface em1, ping the default gateway again, and check the bonding status, and bring em1 back up:
# ifdown em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
# ifup em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
If this all works fine, then you're all set.

Topics: Red Hat / Linux, System Admin

Configuring NTP on CentOS 6

Configuring NTP on CentOS 6 (and similar versions) involves a number of steps, especially if you want to have it configured correctly and securely. Here's a quick guide on how to do it:

First of all you have to determine the IP addresses of the NTP servers you are going to use. You may have to contact your network administrator to find out. Ensure that you get at least two time server IP addresses to use.

Then, install and verify the NTP packages:

# yum -y install ntp ntpdate
# rpm -q ntp ntpdate
Edit file /etc/ntp.conf and ensure that option "broadcastclient" is commented out (which it is by default with a new installation).

Enable ntp and ntpdate at system boot time:
# chkconfig ntpd on
# chkconfig ntpdate on
Ensure that file /etc/ntp/step-tickers is empty. This will make sure that if ntpdate is run, that it will use one of the time servers configured in /etc/ntp.conf.
# cp /dev/null /etc/ntp/step-tickers
Add two time servers to /etc/ntp.conf, or use any of the pre-configured time servers in this file. Comment out the pre-configured servers, if you are using your own time servers.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 1.2.3.4
server 5.6.7.8
Do not copy the example above. Use the IP addresses for each time server that you've received from your network administrator instead.

Enable NTP slewing (for slow time stepping if the time on the server is off, instead of suddenly making big time jump changes), by adding "-x" to OPTIONS in /etc/sysconfig/ntpd. Also add "SYNC_HWCLOCK=yes" in /etc/sysconfig/ntpdate to synchronize the hardware clock with any time changes.
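The two edits can be done with sed. As a sketch, shown here on local sample copies of the files so you can see the effect before touching the real /etc/sysconfig files (the OPTIONS contents are illustrative; your files may carry more options):

```shell
# Sample copies of /etc/sysconfig/ntpd and /etc/sysconfig/ntpdate:
printf 'OPTIONS="-g"\n' > ntpd.sysconfig
printf 'SYNC_HWCLOCK=no\n' > ntpdate.sysconfig

# Add "-x" (slewing) to the ntpd options:
sed -i 's/^OPTIONS="/OPTIONS="-x /' ntpd.sysconfig
# Sync the hardware clock whenever ntpdate adjusts the time:
sed -i 's/^SYNC_HWCLOCK=no/SYNC_HWCLOCK=yes/' ntpdate.sysconfig

cat ntpd.sysconfig ntpdate.sysconfig
```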

Stop the NTP service, if it is running:
# service ntpd stop
Start the ntpdate service (this will synchronize the system clock and the hardware clock):
# service ntpdate start
Now, start the time service:
# service ntpd start
Wait a few minutes for the server to synchronize its time with the time servers. This may take anywhere between a few and 15 minutes. Then check the status of the time synchronization:
# ntpq -p
# ntpstat
The asterisk in front of the time server name in the "ntpq -p" output indicates that the client has reached time synchronization with that particular time server.
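If you want to pick that synced peer out of the output in a script, a small awk filter works. The sample "ntpq -p" output below is illustrative (made-up addresses and statistics):

```shell
# Sample "ntpq -p" output (illustrative values):
ntpq_output='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*1.2.3.4         10.0.0.1         2 u   45   64  377    0.412    0.118   0.023
+5.6.7.8         10.0.0.1         2 u   12   64  377    0.509   -0.231   0.031'

# Print the peer the client is synchronized to (the "*" line):
echo "$ntpq_output" | awk '/^\*/ {sub(/^\*/, ""); print $1}'
```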

Done!

Topics: Red Hat / Linux, Security, System Admin

Disabling SELinux

Security Enhanced Linux, or SELinux for short, is enabled by default on Red Hat Enterprise Linux (and similar) systems.

To determine the status of SELinux, simply run:

# sestatus
There may be times when it is necessary to disable SELinux. For example, when a Linux system is not Internet-facing, you may not need to have SELinux enabled.

From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symbolic link to file /etc/selinux/config.

By default, option SELINUX will be set to enforcing in this file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
By changing it to "permissive", SELinux will log policy violations but will no longer enforce them, which effectively disables its protection:
SELINUX=permissive
To disable SELinux completely, set the option to "disabled" instead. Note that a change to this file only takes effect after a reboot.
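The edit itself is a one-line sed. As a sketch, operating on a local sample copy of the file rather than /etc/selinux/config itself (to switch to permissive mode immediately, without a reboot, you can additionally run "setenforce 0"):

```shell
# Sample copy of /etc/selinux/config (content abbreviated):
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux.config

# Switch from enforcing to permissive:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' selinux.config
grep '^SELINUX=' selinux.config
```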

Topics: Red Hat / Linux, System Admin

Setting the hostname in RHEL 7

Red Hat Enterprise Linux 7 and similar Linux distributions have a new command to easily set the hostname of the system. The command is hostnamectl. For example, to set the hostname of a RHEL 7 system to "flores", run:

# hostnamectl set-hostname flores
The hostnamectl command provides some other interesting features.

For example, it can be used to set the deployment type of the system, such as "development" or "production", or anything else you'd like to give it (as long as it's a single word). For example, to set it to "production", run:
# hostnamectl set-deployment production
Another option is to set the location of the system (and here you can use multiple words):
# hostnamectl set-location "third floor rack A12 U24"
To retrieve all this information, use hostnamectl as well to query the status:
# hostnamectl status
   Static hostname: flores
         Icon name: computer-laptop
           Chassis: laptop
        Deployment: production
          Location: third floor rack A12 U24
        Machine ID: 4d8158f54d5166ff374bb372599351c4
           Boot ID: ae8e7dccf14a492984fb5462c4da2aa2
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-693.2.2.el7.x86_64
      Architecture: x86-64

Topics: Networking, Red Hat / Linux, System Admin

RHEL: Delete multiple default gateways

A Red Hat Enterprise Linux system should have a single default gateway defined. Sometimes, however, a system ends up with multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:

First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0     192.168.0.1     0.0.0.0    UG        0 0        0 em1
0.0.0.0     192.168.1.1     0.0.0.0    UG        0 0        0 em2
In the example above, there are 2 default gateway entries, one to 192.168.0.1, and another one to 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:
# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1
On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This is the global network file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:
GATEWAY=192.168.0.1
GATEWAYDEV=em1
Next, remove any GATEWAY entries in any of the ifcfg-* files in /etc/sysconfig/network-scripts.
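Removing those entries is a one-line sed across the files. As a sketch, shown here on sample ifcfg files in the current directory (on a real system you would target /etc/sysconfig/network-scripts/ifcfg-*):

```shell
# Sample per-interface files, each with its own GATEWAY entry:
printf 'DEVICE=em1\nGATEWAY=192.168.0.1\nONBOOT=yes\n' > ifcfg-em1
printf 'DEVICE=em2\nGATEWAY=192.168.1.1\nONBOOT=yes\n' > ifcfg-em2

# Delete the GATEWAY lines from all per-interface files:
sed -i '/^GATEWAY=/d' ifcfg-em1 ifcfg-em2
```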

Finally, restart the network service:
# service network restart
This should resolve multiple default gateways, and the output of the netstat command should now only show one single entry with 0.0.0.0.

Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default

Topics: Red Hat / Linux, System Admin

Incrond

Incron is an interesting piece of software for Linux, that can monitor for file changes in a specific folder, and can act upon those file changes. For example, it's possible to wait for files to be written in a folder, and have a command run to process these files.

Incron is not installed by default and is part of the EPEL repository. For Red Hat and CentOS 7, it's also possible to just download the RPM package from https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/i/incron-0.5.12-11.el7.x86_64.rpm, for example using wget.

To install incron, run:

# yum -y install /path/to/incron*rpm
There are 4 files important for incron:
  • /etc/incron.conf - The main configuration file for incron, but this file can be left configured as default.
  • /usr/sbin/incrond - This is the incron daemon that has to run for incron to work. You can simply start it by executing this command, and it will automatically run in the background. When it's no longer needed, you can simply kill the /usr/sbin/incrond process. However, it's better to enable the service at system boot time and start the service:
    # systemctl enable incrond.service
    # service incrond start
    
  • /var/log/cron - This is the default location where the incron daemon will log its activities (through rsyslog). The file is also used by the cron daemon, so you may see other messages in this file. By using the tail command on this file, you can monitor what the incron daemon is doing. For example:
    # tail -f /var/log/cron
    
  • The incrontab file - You can edit this file by running:
    # incrontab -e
    
    This command will automatically load the incrontab file in an editor like VI, and you can add/modify/remove entries this way. Once you save the file, its contents will be automatically activated by the incron daemon. To list the entries in the incrontab file, run:
    # incrontab -l
    
There's a specific format to the entries in the incrontab file mentioned above, and the format looks like this:

[path] [mask] [command]

Where:
  • [path] is the folder that the incron daemon will be monitoring for any new files (only in the folder itself, not in any sub-folders).
  • [mask] is the activity that the incron daemon should respond to. There are several different available activities to choose from. For a list of options, see https://linux.die.net/man/5/incrontab. One option that can be used is "IN_CLOSE_WRITE", which means, act if a file is closed for writing, meaning, writing to a file in the folder has been completed.
  • [command] is the command to be run by the incron daemon when a file activity takes place in the monitored path. For this command you can use available wildcards, such as:
    • $@ : watched filesystem path
    • $# : event-related file name
An example of the incrontab file can be:
/path/to/my/folder IN_CLOSE_WRITE /path/to/script.bash $@ $#
You can have multiple entries in the incrontab file, each on a separate line. In the example above, the incron daemon will start script /path/to/script.bash with two parameters (the path of the monitored folder, and the name of the file that was written to the folder), for each file that has been closed for writing in folder /path/to/my/folder.
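As a sketch, a minimal version of the hypothetical /path/to/script.bash handler could look like this: it receives the watched path ($@ becomes $1) and the file name ($# becomes $2) and logs what arrived (the log file name is an assumption for this example):

```shell
# A minimal incron handler script: log each file that arrives.
cat > script.bash <<'EOF'
#!/bin/bash
# $1 = watched directory, $2 = file name (as passed by incron)
echo "$(date '+%F %T') new file: $1/$2" >> /tmp/incron-handler.log
EOF
chmod +x script.bash

# Simulate what incron would do when "report.csv" lands in the folder:
./script.bash /path/to/my/folder report.csv
tail -1 /tmp/incron-handler.log
```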

To monitor the status of the incron daemon, run:
# service incrond status
To restart the incron daemon, run:
# service incrond stop
# service incrond start
Or shorter:
# service incrond restart
There is a downside to using incron: there is no way to limit the number of processes that can be started by the incron daemon. If a thousand files are written to the folder monitored by the incron daemon, it will kick off the process defined in the incrontab file for that folder a thousand times. This may place serious CPU load on a system (or even hang the system), especially if the command being run is CPU- and/or memory-intensive.
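One way the handler script itself can soften this is by serializing the heavy work with flock(1) from util-linux: a burst of a thousand files still spawns a thousand handlers, but only one does the expensive processing at a time while the others wait on the lock. A sketch (log and lock file names are assumptions; this bounds parallelism, not the process count):

```shell
# A handler that serializes its expensive section behind a file lock.
cat > throttled.bash <<'EOF'
#!/bin/bash
# $1 = watched directory, $2 = file name (as passed by incron)
(
  flock 9                      # wait for an exclusive lock on fd 9
  echo "processing $1/$2" >> /tmp/throttled.log
  # ... the actual (expensive) processing would go here ...
) 9>/tmp/incron-handler.lock
EOF
chmod +x throttled.bash

# Simulate an incron invocation:
./throttled.bash /path/to/my/folder file1.csv
```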

Topics: Networking, Red Hat / Linux, Storage, System Admin

Quick NFS configuration on Red Hat

This is a quick NFS configuration on RHEL, without too many concerns about security, fine-tuning, or access control. In our scenario, there are two hosts:

  • NFS Server, IP 10.1.1.100
  • NFS Client, IP 10.1.1.101
First, start with the NFS server:

On the NFS server, run the commands below to begin the NFS server installation:
[nfs-server] # yum install nfs-utils rpcbind
Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create /opt/nfs directory:
[nfs-server] # mkdir -p /opt/nfs
Edit the /etc/exports file (which is the NFS exports file) to add the below line to export folder /opt/nfs to client 10.1.1.101:
/opt/nfs 10.1.1.101(no_root_squash,rw)
Next, make sure to open port 2049 on your firewall to allow client requests:
[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload
Start the rpcbind and NFS server daemons in this order:
[nfs-server] # service rpcbind start; service nfs start
Check the NFS server status:
[nfs-server] # service nfs status 
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; 
 vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           order-with-mounts.conf
   Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
 Main PID: 2883 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Next, export all the file systems configured in /etc/exports:
[nfs-server] # exportfs -rav
And check the currently exported file systems:
[nfs-server] # exportfs -v
Next, continue with the NFS client:

Install the required packages:
[nfs-client] # yum install nfs-utils rpcbind
[nfs-client]# service rpcbind start
Create a mount point directory on the client, for example /mnt/nfs:
[nfs-client] # mkdir -p /mnt/nfs
Discover the NFS exported file systems:
[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101
Mount the previously NFS exported /opt/nfs directory:
[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs
Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:
[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
Move to the server side and check if the testfile file exists:
[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile
At this point it is working, but it is not set up permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it keeps working after a reboot, perform the following steps:

On the NFS server side, to have the NFS server service enabled at system boot time, run:
[nfs-server] # systemctl enable nfs-server
On the NFS client side, add an entry to the /etc/fstab file to ensure the NFS file system is mounted at boot time:
10.1.1.100:/opt/nfs  /mnt/nfs  nfs4  soft,intr,nosuid  0 0
The options for the NFS file systems are as follows:
  • soft = No hard mounting, avoids hanging file access commands on the NFS client, if the NFS servers is unavailable.
  • intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
  • nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
If you need to know on the NFS server side, which clients are using the NFS file system, you can use the netstat command, and search for both the NFS server IP address and port 2049:
[nfs-server] # netstat -an | grep 10.1.1.100:2049
This will tell you the established connections for each of the clients, for example:
tcp  0  0  10.1.1.100:2049  10.1.1.101:757  ESTABLISHED
In the example above you can see that IP address 10.1.1.101 on port 757 (NFS client) is connected to port 2049 on IP address 10.1.1.100 (NFS server).
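If you just want the list of client IP addresses, you can post-process that netstat output with awk. The sample below mirrors the example output above, with one extra made-up client added:

```shell
# Sample netstat output (illustrative; field 5 is the remote endpoint):
netstat_out='tcp  0  0  10.1.1.100:2049  10.1.1.101:757  ESTABLISHED
tcp  0  0  10.1.1.100:2049  10.1.1.102:923  ESTABLISHED'

# Print each unique client IP (strip the port from the remote endpoint):
echo "$netstat_out" | awk '{split($5, a, ":"); print a[1]}' | sort -u
```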

Topics: Red Hat / Linux, System Admin

Watch

On Linux, you can use the watch command to run a specific command repeatedly, and monitor the output.

Watch is a command-line tool, part of the Linux procps and procps-ng packages, that runs the specified command repeatedly and displays the results on standard output so you can watch it change over time. You may need to encase the command in quotes for it to run correctly.

For example, you can run:

# watch "ps -ef | grep bash"
The "-d" argument can be used to highlight the differences between each iteration, for example to highlight the time changes in the ntptime command:
# watch -d ntptime
By default, the command is run every two seconds, although this is adjustable with the "-n" argument. For example, to run the uptime command every second:
# watch -n 1 uptime
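Where watch is not available, or inside a script, a plain shell loop approximates the same behavior; this sketch is bounded to three iterations so it terminates:

```shell
# Roughly equivalent to "watch -n 1 uptime", limited to 3 iterations:
for i in 1 2 3; do
  uptime
  sleep 1
done
```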
