This next piece describes how to configure the storage multi-pathing software on a Red Hat Enterprise Linux 7 system. This software is required if you're using SAN storage and multiple paths to the storage are available (which is usually the case).
First, check that all required software is installed. It generally is, but it's good to verify:

# yum -y install device-mapper-multipath

Next, check if the multipath daemon is running:

# service multipathd status

If it is, stop it:

# service multipathd stop

Next, create file /etc/multipath.conf, which is the configuration file for the multipath daemon:

# mpathconf --enable --with_multipathd y

This will create a default /etc/multipath.conf file, which will often work quite well without any further configuration.

Then start the multipath daemon:

# service multipathd start
Redirecting to /bin/systemctl start multipathd.service

You can now use the lsblk command to view the disks that are configured on the system:

# lsblk

This command should show that mpathX devices have been created, which are the multipath devices managed by the multipath daemon, and you can now start using these mpathX disk devices as storage on the Red Hat system. Another way to check the mpath disk devices available on the system is to look at the /dev/mapper directory:

# ls -als /dev/mapper/mpath*

If you have a clustered environment, where SAN storage devices are zoned and allocated to multiple systems, you may want to ensure that all the nodes in the cluster use the same naming for the mpathX devices. That makes it easier to recognize which disk is which on each system.
To ensure that all the nodes in the cluster use the same naming, first run a "cat /etc/multipath/bindings" command on all nodes, and identify which disks are shared on all nodes, and what the current naming of the mpathX devices on each system looks like. It may well be that the naming of the mpathX devices is already consistent on all cluster nodes.
If it is not, however, then copy file /etc/multipath/bindings from one server to all other cluster nodes. Be careful when doing this, especially when one or more servers in a cluster have more SAN storage allocated than others. Be sure that only those entries in /etc/multipath/bindings are copied over to all cluster nodes, where the entries represent shared storage on all cluster nodes. Any SAN storage allocated to just one server will show up in the /etc/multipath/bindings file for that server only, and it should not be copied over to other servers.
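As a sketch of that comparison, the snippet below pulls the WWID column (the second field) out of two bindings files and prints only the WWIDs present on both nodes, i.e. the shared storage entries that are safe to sync. The file names node1.bindings/node2.bindings and the sample WWIDs are made up for this example:

```shell
# Sample bindings files from two nodes (made-up contents; in practice
# these would be copies of /etc/multipath/bindings from each node).
cat > node1.bindings <<'EOF'
mpatha 360050768018087a8380000000000001a
mpathb 360050768018087a8380000000000001b
EOF
cat > node2.bindings <<'EOF'
mpatha 360050768018087a8380000000000001a
mpathc 360050768018087a8380000000000001c
EOF

# The WWID is the second column; list WWIDs that appear in both files.
awk '{print $2}' node1.bindings | sort > node1.wwids
awk '{print $2}' node2.bindings | sort > node2.wwids
comm -12 node1.wwids node2.wwids
```

Only the bindings entries for the WWIDs this prints should be copied between the cluster nodes.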
Once the file is correct on all cluster nodes, restart the multipath daemon on each cluster node:

# service multipathd stop
# multipath -F
# service multipathd start

If you now do an "ls" in /dev/mapper on each cluster node, you'll see the same mpath names on all cluster nodes.
Once this is complete, make sure that the multipath daemon is started at system boot time as well:
# systemctl enable multipathd
Here's how to register and un-register a Red Hat system through subscription-manager. You'll need to do this, for example, if you wish to do operating system updates on a Red Hat system.
First, here's how to unregister a system. This might come in handy if you do not have enough subscriptions in your Red Hat account and temporarily want to move a valid subscription over to another system:

# subscription-manager unregister
System has been unregistered.

And here's how you register:

# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: [type your Red Hat username here]
Password: [type your Red Hat password here]
The system has been registered with ID: 3db39bee-bd48-46e8-9abc-9ba9

If you have issues registering a server, try removing all Red Hat subscription information first, and then register again, using the "auto-attach" option:
# subscription-manager clean
All local data removed
# subscription-manager list
+-------------------------------------------+
Installed Product Status
+-------------------------------------------+
Product Name: Red Hat Enterprise Linux Server
Product ID: 69
Version: 7.4
Arch: x86_64
Status: Unknown
Status Details:
Starts:
Ends:
# subscription-manager register --auto-attach
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: [type your Red Hat username here]
Password: [type your Red Hat password here]
The system has been registered with ID: 3db39bee-bd48-46e8-9abc-9ba9
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed
# subscription-manager list
+-------------------------------------------+
Installed Product Status
+-------------------------------------------+
Product Name: Red Hat Enterprise Linux Server
Product ID: 69
Version: 7.4
Arch: x86_64
Status: Subscribed
Status Details:
Starts: 12/27/2017
Ends: 12/26/2020
If you wish to use a specific Red Hat subscription, you may first check for the available Red Hat subscriptions, by running:

# subscription-manager list --available --all

In the output of the command above you will see, if any subscriptions are available, a Pool ID. You can use that Pool ID to attach a specific subscription to the system, for example by running:
# subscription-manager attach --pool=8a85f98c6267d2d90162734a700467b2
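If you need to script this, a sketch like the one below can pull the Pool IDs out of saved "subscription-manager list --available --all" output. The sample output file and its contents are assumptions for illustration:

```shell
# Made-up excerpt of "subscription-manager list --available --all"
# output, saved to a file for parsing.
cat > subs.txt <<'EOF'
Subscription Name: Red Hat Enterprise Linux Server
Pool ID: 8a85f98c6267d2d90162734a700467b2
Available: 5
EOF

# Print the third field of every "Pool ID:" line.
awk '/^Pool ID:/ {print $3}' subs.txt
```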
The following procedure describes how to set up a bonded network interface on Red Hat Enterprise Linux. It assumes that you already have a working single network interface and now wish to move the system to a bonded network interface set-up to allow for network redundancy, for example by connecting two separate network interfaces, preferably on two different network cards in the server, to two different network switches. This provides redundancy both if a network card in the server fails and if a network switch fails.

First, log in as user root on the console of the server. We are going to change the current network configuration to a bonded network configuration, and while doing so the system will temporarily lose network connectivity, so it is best to work from the console.

In this procedure, we'll be using network interfaces em1 and p3p1, on two different cards, to get card redundancy (just in case one of the network cards fails).
Let's assume that IP address 172.29.126.213 is currently configured on network interface em1. You can verify that by running:

# ip a s

Also, verify with the ethtool command that there is indeed a good link status on both the em1 and p3p1 network interfaces:

# ethtool em1
# ethtool p3p1

List the bonding module info (the module should be enabled by default, so this is just to verify):

# modinfo bonding

Create copies of the current network files, just for safe-keeping:

# cd /etc/sysconfig/network-scripts
# cp ifcfg-em1 /tmp
# cp ifcfg-p3p1 /tmp

Now, create a new file ifcfg-bond0 in /etc/sysconfig/network-scripts. We'll configure the IP address of the system (the one previously configured on network interface em1) on a new bonded network interface, called bond0. Make sure to update the file with the correct IP address, gateway and network mask for your environment:

# cat ifcfg-bond0
DEVICE=bond0
TYPE=Bond
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.29.126.213
NETMASK=255.255.255.0
GATEWAY=172.29.126.1
BONDING_OPTS="mode=5 miimon=100"
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no

The next thing to do is to create two more files, one for each network interface that will be a slave of the bonded network interface. In our example, those are em1 and p3p1.

Create file /etc/sysconfig/network-scripts/ifcfg-em1 (be sure to adjust the file to your environment, for example by using the correct UUID, which you can find in the copies you've made of the previous network interface files). In this file, you'll also specify that the bond0 interface is now the master:

# cat ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
NAME=em1
UUID=cab24cdf-793e-4aa7-a093-50bf013910db
DEVICE=em1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Create file ifcfg-p3p1:

# cat ifcfg-p3p1
TYPE=Ethernet
BOOTPROTO=none
NAME=p3p1
UUID=5017c829-2a57-4626-8c0b-65e807326dc0
DEVICE=p3p1
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Now, we're ready to start using the new bonded network interface. Restart the network service:

# systemctl restart network.service

Run the ip command to check the current network config:

# ip a s

The IP address should now be configured on the bond0 interface.
Ping the default gateway, to test if your bonded network interface can reach the switch. In our example, the default gateway is set to 172.29.126.1:

# ping 172.29.126.1

This should work. If not, re-trace the steps you've done so far, or work with your network team to identify the issue.

Check that both interfaces of the bonded interface are up, and what the currently active network interface is, by looking at file /proc/net/bonding/bond0. In this file you can see what the currently active slave is, and whether all slaves of the bonded network interface are up. For example:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: p3p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p3p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:ce:26:30
Slave queue ID: 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0

In the example above, the active network interface is p3p1. Let's bring it down, to see if it fails over to network interface em1. You can bring down a network interface using the ifdown command:

# ifdown p3p1
Device 'p3p1' successfully disconnected.

Again, look at the /proc/net/bonding/bond0 file. You can now see that the active network interface has changed to em1, and that network interface p3p1 is no longer listed in the file (because it is down):

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0a:f7:bd:b7:9e
Slave queue ID: 0

Now ping the default gateway again, and make sure it still works (now that we're using network interface em1 instead of network interface p3p1).
Then bring the p3p1 interface back up, using the ifup command:

# ifup p3p1

And check the bonding status again:

# cat /proc/net/bonding/bond0

It should show that the active network interface is still em1; it will not fail back to network interface p3p1 (after all, why would it? Network interface em1 works just fine).

Now repeat the same test, by bringing down network interface em1, pinging the default gateway again, checking the bonding status, and bringing em1 back up:

# ifdown em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1
# ifup em1
# cat /proc/net/bonding/bond0
# ping 172.29.126.1

If this all works fine, then you're all set.
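For monitoring scripts, the currently active slave can be extracted from /proc/net/bonding/bond0. The snippet below runs against a made-up saved snapshot of that file rather than a live bond0 interface:

```shell
# Made-up snapshot of /proc/net/bonding/bond0 saved to a file, so the
# parsing can be shown without a real bonded interface.
cat > bond0.status <<'EOF'
Bonding Mode: transmit load balancing
Currently Active Slave: em1
MII Status: up
EOF

# Extract the currently active slave interface name.
awk -F': ' '/^Currently Active Slave/ {print $2}' bond0.status
```

On a live system you would point the awk command at /proc/net/bonding/bond0 directly.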
Configuring NTP on CentOS 6 (and similar versions) involves a number of steps, especially if you want to have it configured correctly and securely. Here's a quick guide on how to do it:
First of all you have to determine the IP addresses of the NTP servers you are going to use. You may have to contact your network administrator to find out. Ensure that you get at least two time server IP addresses to use.
Then, install and verify the NTP packages:

# yum -y install ntp ntpdate
# rpm -q ntp ntpdate

Edit file /etc/ntp.conf and ensure that option "broadcastclient" is commented out (which it is by default with a new installation).

Enable ntpd and ntpdate at system boot time:

# chkconfig ntpd on
# chkconfig ntpdate on

Ensure that file /etc/ntp/step-tickers is empty. This makes sure that if ntpdate is run, it will use one of the time servers configured in /etc/ntp.conf:

# cp /dev/null /etc/ntp/step-tickers

Add two time servers to /etc/ntp.conf, or use any of the pre-configured time servers in this file. Comment out the pre-configured servers if you are using your own time servers:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 1.2.3.4
server 5.6.7.8

Do not copy the example above. Use the IP addresses for each time server that you've received from your network administrator instead.

Enable NTP slewing (for slow time stepping if the time on the server is off, instead of making sudden big time jumps), by adding "-x" to OPTIONS in /etc/sysconfig/ntpd. Also add "SYNC_HWCLOCK=yes" in /etc/sysconfig/ntpdate to synchronize the hardware clock with any time changes.

Stop the NTP service, if it is running:

# service ntpd stop

Start the ntpdate service (this will synchronize the system clock and the hardware clock):

# service ntpdate start

Now, start the time service:

# service ntpd start

Wait a few minutes for the server to synchronize its time with the time servers. This may take anywhere between a few and 15 minutes. Then check the status of the time synchronization:

# ntpq -p
# ntpstat

The asterisk in front of a time server name in the "ntpq -p" output indicates that the client has reached time synchronization with that particular time server.
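A script can test for that asterisk to decide whether the clock is in sync. This sketch uses a made-up saved "ntpq -p" output file instead of querying a live daemon:

```shell
# Made-up "ntpq -p" output saved to a file; the leading "*" marks the
# time server the client is currently synchronized with.
cat > ntpq.out <<'EOF'
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*1.2.3.4         10.0.0.1         2 u   45   64  377    0.512    0.081   0.034
+5.6.7.8         10.0.0.2         2 u   40   64  377    0.601   -0.120   0.051
EOF

# Report whether any peer line starts with "*" (i.e. we are synchronized).
if grep -q '^\*' ntpq.out; then
  echo "synchronized"
else
  echo "not synchronized"
fi
```

On a live system you would feed it "ntpq -p" output directly, e.g. ntpq -p | grep -q '^\*'.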
Done!
Security Enhanced Linux, or SELinux for short, is enabled by default on Red Hat Enterprise Linux (and similar) systems.

To determine the status of SELinux, simply run:

# sestatus

There will be times when it may be necessary to disable SELinux. For example, when a Linux system is not Internet-facing, you may not need to have SELinux enabled.

From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symbolic link to file /etc/selinux/config.

By default, option SELINUX will be set to enforcing in this file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing

By changing it to "permissive", SELinux will no longer enforce its policy, and will only log warnings:

SELINUX=permissive

Note that a change in this file takes effect after a reboot; to switch to permissive mode immediately, run "setenforce 0".
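The same edit can be made non-interactively with sed. The sketch below works on a local copy of the file rather than the real /etc/selinux/config:

```shell
# Work on a local copy of the SELinux config, so the edit can be shown
# without touching the real /etc/selinux/config.
cat > selinux.config <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Flip enforcing to permissive, then show the resulting setting.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' selinux.config
grep '^SELINUX=' selinux.config
```

On a real system you would run the sed command against /etc/selinux/config (after making a backup).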
Red Hat Enterprise Linux 7 and similar Linux distributions have a new command to easily set the hostname of the system: hostnamectl. For example, to set the hostname of a RHEL 7 system to "flores", run:

# hostnamectl set-hostname flores

The hostnamectl command provides some other interesting features.

For example, it can be used to set the deployment type of the system, such as "development" or "production" or anything else you'd like to give it (as long as it's a single word). For example, to set it to "production", run:

# hostnamectl set-deployment production

Another option is to set the location of the system (and here you can use multiple words):

# hostnamectl set-location "third floor rack A12 U24"

To retrieve all this information, use hostnamectl to query the status:

# hostnamectl status
Static hostname: flores
Icon name: computer-laptop
Chassis: laptop
Deployment: production
Location: third floor rack A12 U24
Machine ID: 4d8158f54d5166ff374bb372599351c4
Boot ID: ae8e7dccf14a492984fb5462c4da2aa2
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.2.2.el7.x86_64
Architecture: x86-64
A Red Hat Enterprise Linux system should have a single default gateway defined. However, it does sometimes occur that a system has multiple default gateways. Here's how to detect multiple default gateways and how to get rid of them:
First, check the number of default gateways defined, by running the netstat command and looking for entries that start with 0.0.0.0:

# netstat -nr | grep ^0.0.0.0
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 em1
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 em2

In the example above, there are two default gateway entries: one to 192.168.0.1, and another to 192.168.1.1.

Quite often, more than one default gateway will be defined on a RHEL system if there are multiple network interfaces present, and a GATEWAY entry is defined in each of the network interface files in /etc/sysconfig/network-scripts/ifcfg-*:

# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-*
ifcfg-em1:GATEWAY=192.168.0.1
ifcfg-em2:GATEWAY=192.168.1.1

On a system with multiple network interfaces, it is best to define the default gateway in file /etc/sysconfig/network instead. This is the global network file. Put the following entries in this file, assuming your default gateway is 192.168.0.1 and the network interface to be used for the default gateway is em1:

GATEWAY=192.168.0.1
GATEWAYDEV=em1

Next, remove any GATEWAY entries from the ifcfg-* files in /etc/sysconfig/network-scripts.

Finally, restart the network service:

# service network restart

This should resolve the multiple default gateways, and the output of the netstat command should now show only one single entry with 0.0.0.0.
Note: If the netstat command is not available on the system, you may also determine the number of default gateways, by running:
# ip route show | grep ^default
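The GATEWAY clean-up step above can also be scripted with sed. The sketch below works on made-up copies of the ifcfg-* files in a scratch directory, not on the real /etc/sysconfig/network-scripts:

```shell
# Made-up copies of the interface files, in a scratch directory.
mkdir -p netscripts
cat > netscripts/ifcfg-em1 <<'EOF'
DEVICE=em1
BOOTPROTO=none
GATEWAY=192.168.0.1
EOF
cat > netscripts/ifcfg-em2 <<'EOF'
DEVICE=em2
BOOTPROTO=none
GATEWAY=192.168.1.1
EOF

# Delete every line that starts with GATEWAY= in each ifcfg-* file.
sed -i '/^GATEWAY=/d' netscripts/ifcfg-*
grep '^GATEWAY=' netscripts/ifcfg-* || echo "no GATEWAY entries left"
```

On a real system, point the sed command at /etc/sysconfig/network-scripts/ifcfg-* after backing the files up.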
Incrond
Incron is an interesting piece of software for Linux that can monitor a specific folder for file changes and act upon those changes. For example, it's possible to wait for files to be written to a folder, and have a command run to process these files.
Incron is not installed by default and is part of the EPEL repository. For Red Hat and CentOS 7, it's also possible to just download the RPM package from https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/i/incron-0.5.12-11.el7.x86_64.rpm, for example using wget.
To install incron, run:

# yum -y install /path/to/incron*rpm

There are four important files for incron:
- /etc/incron.conf - The main configuration file for incron; it can usually be left at its defaults.
- /usr/sbin/incrond - This is the incron daemon that has to run for incron to work. You can simply start it by executing this command, and it will automatically run in the background. When it's no longer needed, you can simply kill the /usr/sbin/incrond process. However, it's better to enable the service at system boot time and start the service:

# systemctl enable incrond.service
# service incrond start
- /var/log/cron - This is the default location where the incron daemon will log its activities (through rsyslog). The file is also used by the cron daemon, so you may see other messages in this file. By using the tail command on this file, you can monitor what the incron daemon is doing. For example:
# tail -f /var/log/cron
- The incrontab file - You can edit this file by running:

# incrontab -e

This command will automatically load the incrontab file in an editor like vi, and you can add/modify/remove entries this way. Once you save the file, its contents will be automatically activated by the incron daemon. To list the entries in the incrontab file, run:

# incrontab -l

Each entry in the incrontab file has the following format:

[path] [mask] [command]
Where:
- [path] is the folder that the incron daemon will be monitoring for any new files (only in the folder itself, not in any sub-folders).
- [mask] is the activity that the incron daemon should respond to. There are several different available activities to choose from. For a list of options, see https://linux.die.net/man/5/incrontab. One option that can be used is "IN_CLOSE_WRITE", which means, act if a file is closed for writing, meaning, writing to a file in the folder has been completed.
- [command] is the command to be run by the incron daemon when a file activity takes place in the monitored path. For this command you can use available wildcards, such as:
- $@ : watched filesystem path
- $# : event-related file name
You can have multiple entries in the incrontab file, each on a separate line. For example:

/path/to/my/folder IN_CLOSE_WRITE /path/to/script.bash $@ $#

In this example, the incron daemon will start script /path/to/script.bash with two parameters (the path of the monitored folder, and the name of the file that was written to the folder), for each file that has been closed for writing in folder /path/to/my/folder.
To monitor the status of the incron daemon, run:

# service incrond status

To restart the incron daemon, run:

# service incrond stop
# service incrond start

Or shorter:

# service incrond restart

There is a downside to using incron: there is no way to limit the number of processes that can be started by the incron daemon. If a thousand files are written to the folder monitored by the incron daemon, it will kick off the process defined in the incrontab file for that folder a thousand times. This may place a serious CPU load on the system (or even hang it), especially if the command being run is CPU and/or memory intensive.
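As a sketch of what such a handler script might look like, the snippet below creates a hypothetical /tmp/script.bash (the name matches the incrontab example above, but its contents and output format are made up) and then invokes it by hand, the way incron would with $@ and $#:

```shell
# Hypothetical handler script; incron would pass the watched path ($@)
# as the first argument and the file name ($#) as the second.
cat > /tmp/script.bash <<'EOF'
#!/bin/bash
watched_dir="$1"
file_name="$2"
# Process the completed file; here we just count its lines.
lines=$(wc -l < "$watched_dir/$file_name")
echo "processed $watched_dir/$file_name ($lines lines)"
EOF
chmod +x /tmp/script.bash

# Simulate what incron would do when a file is closed for writing:
mkdir -p /tmp/watched
printf 'one\ntwo\n' > /tmp/watched/data.txt
/tmp/script.bash /tmp/watched data.txt
# prints: processed /tmp/watched/data.txt (2 lines)
```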
This is a quick NFS configuration on RHEL, without too many concerns about security, fine-tuning, or access control. In our scenario, there are two hosts:
- NFS Server, IP 10.1.1.100
- NFS Client, IP 10.1.1.101
On the NFS server, run the below commands to begin the NFS server installation:

[nfs-server] # yum install nfs-utils rpcbind

Next, for this procedure, we export an arbitrary directory called /opt/nfs. Create the /opt/nfs directory:

[nfs-server] # mkdir -p /opt/nfs

Edit the /etc/exports file (which is the NFS exports file) and add the below line to export folder /opt/nfs to client 10.1.1.101:

/opt/nfs 10.1.1.101(no_root_squash,rw)

Next, make sure to open port 2049 on your firewall to allow client requests:

[nfs-server] # firewall-cmd --zone=public --add-port=2049/tcp --permanent
[nfs-server] # firewall-cmd --reload

Start the rpcbind and NFS server daemons, in this order:

[nfs-server] # service rpcbind start; service nfs start

Check the NFS server status:

[nfs-server] # service nfs status
Redirecting to /bin/systemctl status nfs.service
nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled;
vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
order-with-mounts.conf
Active: active (exited) since Tue 2017-11-14 09:06:21 CST; 1h 14min ago
Main PID: 2883 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service

Next, export all the file systems configured in /etc/exports:

[nfs-server] # exportfs -rav

And check the currently exported file systems:

[nfs-server] # exportfs -v

Next, continue with the NFS client.

Install the required packages:

[nfs-client] # yum install nfs-utils rpcbind
[nfs-client] # service rpcbind start

Create a mount point directory on the client, for example /mnt/nfs:

[nfs-client] # mkdir -p /mnt/nfs

Discover the NFS exported file systems:

[nfs-client] # showmount -e 10.1.1.100
Export list for 10.1.1.100:
/opt/nfs 10.1.1.101

Mount the previously NFS exported /opt/nfs directory:

[nfs-client] # mount 10.1.1.100:/opt/nfs /mnt/nfs

Test the correctness of the setup between the NFS server and the NFS client by creating a file in the NFS mounted directory on the client side:

[nfs-client] # cd /mnt/nfs/
[nfs-client] # touch testfile
[nfs-client] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile

Move to the server side and check if the testfile exists:

[nfs-server] # cd /opt/nfs/
[nfs-server] # ls -l
total 0
-rw-r--r--. 1 root root 0 Dec 11 08:13 testfile

At this point it is working, but it is not yet set up permanently (as in: it will be gone when either the NFS server or the NFS client is rebooted). To ensure it keeps working even after a reboot, perform the following steps:
On the NFS server side, to have the NFS server service enabled at system boot time, run:

[nfs-server] # systemctl enable nfs-server

On the NFS client side, add an entry to the /etc/fstab file that will ensure the NFS file system is mounted at boot time:

10.1.1.100:/opt/nfs /mnt/nfs nfs4 soft,intr,nosuid 0 0

The options for the NFS file system are as follows:
- soft = No hard mounting; avoids hanging file access commands on the NFS client if the NFS server is unavailable.
- intr = Allow NFS requests to be interrupted if the NFS server goes down or can't be reached.
- nosuid = This prevents remote users from gaining higher privileges by running a setuid program.
To see the established connections for each of the clients, run the following on the NFS server:

[nfs-server] # netstat -an | grep 10.1.1.100:2049
tcp 0 0 10.1.1.100:2049 10.1.1.101:757 ESTABLISHED

In the example above you can see that IP address 10.1.1.101 on port 757 (the NFS client) is connected to port 2049 on IP address 10.1.1.100 (the NFS server).
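If there are many clients, a small awk sketch can list just the connected client addresses instead of eyeballing the netstat output. The saved output file and its contents are made up for this example:

```shell
# Made-up netstat output saved to a file for parsing.
cat > netstat.out <<'EOF'
tcp 0 0 10.1.1.100:2049 10.1.1.101:757 ESTABLISHED
tcp 0 0 10.1.1.100:2049 10.1.1.102:702 ESTABLISHED
EOF

# Field 5 is the foreign (client) address; strip the port to get the IP.
awk '$6 == "ESTABLISHED" {split($5, a, ":"); print a[1]}' netstat.out
```

On a live NFS server you would pipe "netstat -an | grep :2049" into the awk command instead.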
Watch
On Linux, you can use the watch command to run a specific command repeatedly, and monitor the output.
Watch is a command-line tool, part of the Linux procps and procps-ng packages, that runs the specified command repeatedly and displays the results on standard output, so you can watch the output change over time. You may need to enclose the command in quotes for it to run correctly.
For example, you can run:

# watch "ps -ef | grep bash"

The "-d" argument can be used to highlight the differences between each iteration, for example to highlight the time changes in the ntptime command:

# watch -d ntptime

By default, the command is run every two seconds, although this is adjustable with the "-n" argument. For example, to run the uptime command every second:

# watch -n 1 uptime


