Posts Tagged ‘Linux’

Configure HAProxy and Keepalived for Load Balancing and Reverse Proxy on Red Hat/Scientific/CentOS Linux 5/6

June 28, 2011 6 comments

HAProxy is an open source load balancer/reverse proxy that can provide high availability for your network services. While generally used for web services, it can also provide more reliability for services such as SMTP and terminal services. In addition, we can combine it with the Keepalived package to provide high availability/failover for the HAProxy server itself. HAProxy plus Keepalived is a good solution for high availability at a very low cost in comparison to proprietary hardware-based load balancers.

In this example I will configure 2 HAProxy/Keepalived servers (lb1/lb2) that will direct traffic to 2 Apache web servers (web1/web2). I will not detail the setup of the web servers. Here is the server and IP address configuration scheme:


Configure on both lb1/lb2

First we need to activate the Extra Packages for Enterprise Linux (EPEL) repository, which should host packages of the HAProxy and Keepalived software. Install EPEL repo info into YUM:

# rpm -ivh

Now we will install the HAProxy and Keepalived software:

# yum -y install haproxy keepalived

Configure Keepalived on lb1/lb2

Move the existing config because we will basically be starting from scratch.

# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Edit the Keepalived config file and make it look something like below:

# nano /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {           # Requires keepalived-1.1.13
    script "killall -0 haproxy"     # cheaper than pidof
    interval 2                      # check every 2 seconds
    weight 2                        # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101                    # 101 on master, 100 on backup
    virtual_ipaddress {
        # the shared virtual IP address(es) go here
    }
    track_script {
        chk_haproxy
    }
}

Make sure to change the priority under the vrrp_instance to 100 on the backup HAProxy server “lb2”. Other than that parameter, the keepalived.conf file should be identical on both lb1 and lb2. Also, if you want to add additional virtual IP addresses for additional services/servers in your network, simply add an IP address under the virtual_ipaddress directive.
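The weight parameter is what makes the failover work: while the chk_haproxy check passes, Keepalived adds the weight to the base priority, so the master always outranks the backup until its HAProxy process dies. A sketch of the arithmetic, using the illustrative numbers from the config above (not Keepalived output):

```shell
# Effective VRRP priority = base priority + weight while the check passes.
echo $((101 + 2))   # lb1, haproxy running: 103 -> holds the virtual IP
echo $((100 + 2))   # lb2, haproxy running: 102 -> stays backup
# If haproxy dies on lb1, the check fails and lb1 falls back to its base
# priority, so lb2 now outranks it and takes over the virtual IP:
echo $((101 + 0))   # lb1, haproxy down: 101 < 102
```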

Now we need to configure the system to allow HAProxy to access shared virtual IP addresses. First make a backup of the sysctl.conf file:

# cp /etc/sysctl.conf /etc/sysctl.conf.bak

Now add this line to the end of the /etc/sysctl.conf file:

net.ipv4.ip_nonlocal_bind = 1

Now execute sysctl to apply the new parameter:

# sysctl -p

Extra iptables configuration is required for Keepalived; in particular, we must allow the multicast packets that VRRP uses. If this doesn’t work you can always disable the iptables firewall for testing purposes. Of course this is not recommended in production environments!

Enable the multicast packets:

# iptables -I INPUT -d -j ACCEPT

You may need to add this rule for the VRRP IP protocol:

# iptables -I INPUT -p 112 -j ACCEPT

In addition insert a rule that will correspond with the traffic that you are load balancing, in my case HTTP:

# iptables -I INPUT -p tcp --dport 80 -j ACCEPT

Finally save the iptables config so it will be restored after restarting and start Keepalived:

# service iptables save
# service keepalived start

Check lb1

Now let’s check to see if Keepalived is listening on the virtual IP address that we specified:

# ip addr sh eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:50:56:87:00:0f brd ff:ff:ff:ff:ff:ff
inet brd scope global eth0
inet scope global eth0
inet6 fe80::250:56ff:fe87:f/64 scope link
valid_lft forever preferred_lft forever

Note that the virtual IP address is bound to network interface eth0.

Check lb2

# ip addr sh eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:50:56:87:00:22 brd ff:ff:ff:ff:ff:ff
inet brd scope global eth0
inet6 fe80::250:56ff:fe87:22/64 scope link
valid_lft forever preferred_lft forever

Note that the virtual IP address is not bound on the backup HAProxy server lb2. You can test failover by disconnecting lb1 from the network or shutting down the keepalived service on lb1.

Configure HAProxy on lb1/lb2

First make a backup of the HAProxy config file for good measure:

# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

Basically you will want to keep the global and default configuration sections intact. Remove or remark out the frontend and backend sections below them. The frontend/backend configuration is a more modern way to implement proxying, however I will perform the more traditional configuration and specify my options in one configuration section for each specific virtual IP address that I will proxy.

# nano /etc/haproxy/haproxy.cfg

listen webfarm                      # the bind address (the shared virtual IP) and port go on this line
    mode http
    balance source
    cookie JSESSIONID prefix
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server web1 cookie A check      # each server line takes its address:port before the options
    server web2 cookie B check

You can read about the different configuration options available on the HAProxy website. There is a plethora of them! Basically the “balance source” option will have each client connect to the same web server unless it fails. A variable will be inserted in a session cookie that will specify which server to direct a client to. Also HAProxy will check if a web server is available with “option httpchk” by inspecting a file on the web server root named check.txt.  You will want to create this file on each web server. The “option httpclose” and “option forwardfor” are needed to allow the web server to log the actual IP address source for each client request, otherwise the HAProxy server IP will be logged because it is proxying the connection.
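Since “option httpchk” issues a HEAD request for /check.txt, each web server needs that file in its document root. A minimal sketch, assuming the stock Red Hat Apache docroot of /var/www/html (adjust the path if your docroot differs):

```shell
# Run on each web server: an empty file is enough, because HAProxy only
# checks that the HEAD request for it returns a successful status.
DOCROOT=/var/www/html
mkdir -p "$DOCROOT"          # harmless if the docroot already exists
touch "$DOCROOT/check.txt"
```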

Enable HAProxy and Keepalived to start automatically on system boot and start HAProxy:

# chkconfig haproxy on
# chkconfig keepalived on
# service haproxy start

Configure logging on web1/web2

This part is optional but there are some configuration changes needed on the web servers in order to have proper logging. In this example my web1/web2 servers are Apache and Red Hat/CentOS based.

# nano /etc/httpd/conf/httpd.conf

First, since the HAProxy servers check availability by requesting the check.txt file from each web server, we will want to disable logging of requests for this file. Otherwise every health check will be logged.

Remark out any lines that begin with “CustomLog”. Then at the end of the file add these lines which will prevent logging access to check.txt:

SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog logs/access_log combined env=!dontlog

Additionally you will want to specify that Apache log the client IP address forwarded by HAProxy, not the IP of the HAProxy server. Remark out the existing LogFormat directive similar to the one commented below and substitute the next line with the “X-Forwarded-For” parameter. Please note that the second line is wrapped, it should be only a single line below the commented line.

#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
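To illustrate the difference, here are two hypothetical access_log lines using made-up RFC 5737 example addresses (not addresses from this setup): the first is what the stock “%h” format records, the second is what the “X-Forwarded-For” format records for the same request.

```
# Stock %h format: every request appears to come from the HAProxy server.
192.0.2.10 - - [28/Jun/2011:10:00:00 -0500] "GET / HTTP/1.1" 200 2326 "-" "Mozilla/5.0"
# %{X-Forwarded-For}i format: the real client address is logged instead.
198.51.100.7 - - [28/Jun/2011:10:00:00 -0500] "GET / HTTP/1.1" 200 2326 "-" "Mozilla/5.0"
```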

Hopefully that should do it. You can test by shutting down or disconnecting the network on either of your web servers.

For planned maintenance you can delete the check.txt file on the web server that you will be working on. With this configuration new client requests will go to the server that passes the check but existing client sessions should be maintained on the existing server with the check file removed. In this way you can have a scheme where you wait for client connections to be drained completely on the server with the check file removed, then you can perform maintenance. That way downtime is minimized and the user experience with sessions is maintained.

Good luck with HAProxy!


Categories: Linux Tags:

Compiz Fusion and Dell Inspiron 700m with Intel 855GM Video Chipset

June 23, 2011 4 comments

The Dell Inspiron 700m is a mid-2000s vintage laptop with a 1.6 GHz Pentium M CPU and an Intel 855GM video adapter. While antiquated by today’s standards, I have one as a spare and find it still works reasonably well for basic web surfing and word processing. One downside of the hardware’s age is that the Intel graphics adapter is not supported for many graphics capabilities of modern OSes, such as Aero on Windows Vista/7. For me, Windows 7 is not very pretty without Aero.

However, with a few minor issues I have found that the 700m is capable of supporting 3D acceleration and Compiz Fusion under Linux. For many laptops there have been issues with the Intel 855 adapter and the current development of the Intel graphics driver on Linux. As a result with Linux distros such as Ubuntu 10.04/10.10 the intel driver is not used by default and the fbdev driver is loaded instead. See the Ubuntu wiki for details on these issues and some possible resolutions:

In the case of the Inspiron 700m I have found that these workarounds were not needed with several Linux distributions. In particular, Fedora 14 seems to load the Intel driver and Compiz can be enabled without any tweaking after performing a standard GNOME based desktop install. One caveat is that I have not tested Fedora extensively, so there may be issues still that I am not aware of.

In addition, Ubuntu 10.10 is capable of Compiz desktop effects on the 700m. However, it will not work immediately after the initial GNOME desktop install. Here are the steps I needed to get Compiz working on Ubuntu 10.10:

Run a full system update. At the time of this writing the system is updated to Linux kernel version 2.6.35-28 and the Intel driver (xserver-xorg-video-intel) version 2:2.12.0-1. When completed, reboot the system.

Create an xorg.conf file. By default this is not present on Ubuntu. However, it seems to be necessary for Compiz desktop effects to activate successfully even though no modifications to the file are necessary. To make the xorg.conf:

At the GNOME desktop press Ctrl-Alt-F1.

Login with your standard user account.

Shutdown the display manager, enter the user password again when prompted:

sudo /etc/init.d/gdm stop

Run X with configure switch to create the xorg.conf file:

sudo X -configure

Now copy the generated file into the X11 directory:

sudo cp /etc/X11/xorg.conf

Now reboot the system:

sudo reboot

After the reboot log back in and go to System > Preferences > Appearance, Visual Effects tab and choose Normal or Extra. Hopefully your Compiz effects should activate.

One issue I have seen is that occasionally there are some small artifacts on the window borders of inactive windows inside GNOME. I have found that under the XFCE desktop environment these visual defects are not present. I have not tried using Emerald as the window decorator, so perhaps that could solve the issue as well.

At the moment I have not found any way to get Compiz to work with the Intel 855GM under Ubuntu 11.04. It is my understanding that the version of Compiz in Ubuntu 11.04 requires OpenGL 1.4, and the Intel 855GM is only capable of OpenGL 1.3. I tried downgrading the Compiz version on Ubuntu 11.04, but I was still not able to get compositing to work correctly; all kinds of visual artifacts would appear. Please note that downgrading will also disable the Unity interface. If you’d like to give the downgrade a try, please see this guide:

Good luck with your Compiz adventure!

Categories: Linux, Ubuntu Tags: ,

Configure OpenSSH Public Key Encryption with Keychain for Passwordless SSH Logins

April 19, 2011 1 comment

Public key encryption is a powerful tool that you can use with SSH to log in to remote hosts without entering a password every time. It can be much more secure than simple password authentication. It is also ideal for unattended scripting and automation, where a password cannot be entered interactively.

In this example I will authenticate between two SSH systems, aptly named “client1” and “server1”. Server1 is the host that we want to log in to without a password. In this example I am using an RPM-based Red Hat Enterprise/CentOS/Scientific Linux system, but it should be similar for most other Linux distributions and Windows tools such as Cygwin.

First, on client1, run the ssh-keygen utility to generate a public/private key pair. On my system it defaults to the RSA type with a 2048-bit key length. You will also want to enter a passphrase to protect your private key. It is a good idea to make the passphrase very complex, with numbers and capitals included. Avoid the temptation to leave the passphrase blank, because later we will set up keychain and the ssh-agent utility to avoid entering the passphrase every time we log in.

[aaron@client1 ~]$ ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/aaron/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/aaron/.ssh/id_rsa.
Your public key has been saved in /home/aaron/.ssh/

Now you will want to copy the user’s public key to the server we want to authenticate to. One thing to note is that with the minimal server install of Red Hat Enterprise/Scientific Linux 6, the openssh-clients package does not get installed. This will prevent us from using secure copy (SCP) to copy the public key to the server. Since I will use SCP in the future, I will make sure the clients package is installed on the Linux SSH servers. You could run something like this from your client if on Red Hat/Scientific, or just do it locally on the server.

[aaron@client1 ~]$ ssh root@server1 "yum -y install openssh-clients"

Now we will copy the public key over:

[aaron@client1 ~]$ scp ~/.ssh/ aaron@server1:

The authenticity of host 'server1 (' can't be established.
RSA key fingerprint is 5c:83...
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server1,' (RSA) to the list of known hosts.
aaron@server1's password:                                    100%  392     0.4KB/s   00:00

Now we need to log in to the remote server. We need to create the “~/.ssh” directory, where we will place the public key in a special file that allows us to authenticate with it. Unfortunately I wasn’t able to remotely chain commands together to make this directory automatically, because SELinux denies this and the OpenSSH client software itself must create the directory. So we have to log in to the server and then run the ssh command to set it up the first time.

[aaron@client1 ~]$ ssh server1

aaron@server1's password:
Last login: Mon Mar 21 17:11:09 2011 from client1

Now run “ssh” and attempt to connect to any host to set up the ~/.ssh directory. We can simply answer “no” when prompted whether to continue; we don’t need to actually connect to a host to get the directory created.

[aaron@server1 ~]$ ssh localhost

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 10:01...
Are you sure you want to continue connecting (yes/no)? no

Now copy the user public key into the authorized_keys file under “~/.ssh”, we also need to set the permissions on the file to allow only this user to read/write the file:

[aaron@server1 ~]$ cat >> ~/.ssh/authorized_keys
[aaron@server1 ~]$ chmod 600 ~/.ssh/authorized_keys
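The chmod step matters because sshd, with its default StrictModes setting, will ignore key files and directories that other users can write to. A self-contained sketch of the expected layout, using a scratch directory rather than a real home directory:

```shell
# Reproduce the ~/.ssh layout in a temporary directory and print the modes.
d=$(mktemp -d)
mkdir "$d/.ssh"
chmod 700 "$d/.ssh"                    # only the owner may enter
touch "$d/.ssh/authorized_keys"
chmod 600 "$d/.ssh/authorized_keys"    # only the owner may read/write
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"
```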

And we’ll now exit to go back to the client:

[aaron@server1 ~]$ exit
Connection to server1 closed.

Finally let’s test to make sure the server will use the user’s public/private key pair for encryption rather than passwords:

[aaron@client1 ~]$ ssh server1

Enter passphrase for key '/home/aaron/.ssh/id_rsa':
Last login: Tue Mar 22 14:40:52 2011 from client1

The prompt asked for the passphrase so it should now be using the public/private key pair.  Now exit again back to the client:

[aaron@server1 ~]$ exit

Keychain and SSH-Agent Configuration

Now we will use keychain along with ssh-agent to cache our decrypted private key in memory so that we won’t have to enter the passphrase each time we log in to the server with ssh. While there is a minor security risk in doing this, it is much more secure than using a private key with no passphrase.

First ensure that the keychain package is installed; it should be available for most Linux environments (including Cygwin). For Red Hat/CentOS/Scientific Linux it is available in the RPMForge repository. I am on Scientific Linux 6/32-bit:

[aaron@client1 ~]$ su -c 'rpm -ivh'

And install the keychain package:

[aaron@client1 ~]$ su -c 'yum -y install keychain'

First test that keychain can access and launch the ssh-agent to decrypt the private key and load it into memory:

[aaron@client1 ~]$ /usr/bin/keychain ~/.ssh/id_rsa

* keychain 2.7.0 ~
* Starting ssh-agent...
* Starting gpg-agent...
* Adding 1 ssh key(s): /home/aaron/.ssh/id_rsa
Enter passphrase for /home/aaron/.ssh/id_rsa:
* ssh-add: Identities added: /home/aaron/.ssh/id_rsa

Now we need to tell the current session to use the ssh-agent if it is already running:

[aaron@client1 ~]$ source ~/.keychain/${HOSTNAME}-sh > /dev/null

That should do it.  You will now be able to authenticate without entering the passphrase every time:

[aaron@client1 ~]$ ssh server1
Last login: Tue Mar 22 14:45:56 2011 from client1
[aaron@server1 ~]$

Now edit the “.bash_profile” file and append the content we entered manually above to run when we start a login session. This will prompt you to decrypt the RSA private key with the passphrase at the next login if the ssh-agent doesn’t have the key decrypted already in memory:

$ vi ~/.bash_profile

# Start Keychain
/usr/bin/keychain ~/.ssh/id_rsa
source ~/.keychain/${HOSTNAME}-sh > /dev/null

I have noticed that on Linux the ssh-agent will keep running with the private key decrypted until the client is rebooted. If the client is on Windows in an environment like Cygwin, the ssh-agent will stay running until the user fully logs out of the desktop session.


Categories: Linux Tags:

Install Samba Server on Red Hat Enterprise Linux/CentOS/Scientific Linux 6

March 26, 2011 7 comments

Recently the latest version of Scientific Linux 6 was released. Scientific Linux is a distribution which uses Red Hat Enterprise Linux as its upstream and aims to be compatible with binaries compiled for Red Hat Enterprise. I am really impressed with the quality of this distro and the timeliness with which updates and security fixes are distributed. Thanks to all the developers and testers on the Scientific Linux team!

In this post I will discuss installing Red Hat Enterprise Linux/CentOS/Scientific Linux 6 as a Samba server. The instructions should also be relevant to other Linux distros including CentOS. This example will rely on a local user database as the mechanism to provide security. In future posts I may discuss more complex scenarios including integrating the Samba server into Windows domains and Active Directory.

Let’s start off by installing the Samba server package and its dependencies:

# yum -y install samba

It is a good idea to set up a distinct group to allow access to the directory we will share. I will specify a group ID to prevent any overlap with the default groups created when individual users are added, which on most Linux distros these days start at 500 or 1000.

# groupadd -g 10000 fileshare

Now we will create a directory that will host our Samba share:

# mkdir /home/data

We need to modify the permissions on the directory to allow write access for users in our new group:

# chgrp fileshare /home/data
# chmod g+w /home/data


UPDATE (5/10/2011): Recently I was setting up a Samba share on an existing file system that already contained files, and I was unable to get SELinux configured to allow Samba to function correctly. This occurred even when using the -R option specified below to recurse and relabel the existing files. So be aware that you may have problems like I did, and you may need to set SELinux to permissive or disabled in the “/etc/selinux/config” file. In my case there were no denials logged in “/var/log/audit/audit.log”, so it was very difficult to troubleshoot.

Now we need to modify SELinux to allow access privilege to our new Samba share. By default this is denied and users will be unable to write files to the share. Details of the SELinux configuration needed can be found in the default config file “/etc/samba/smb.conf”.

Here are some good references regarding SELinux:

Now run the SELinux config command to allow user access to the Samba share directory. New directories and files created under our Samba share directory will automatically inherit the SELinux context of the parent directory. Use the -R option with “chcon” to recurse if there are existing files in the directory you are sharing:

# chcon -t samba_share_t /home/data

Now we will create a user to access the Samba share. The command options specify to add the user to a supplementary group “fileshare”, do not create a home directory, and set the login shell to “/sbin/nologin” to prevent logins to the console. We only want the user access to the Samba file share:

# useradd -G fileshare -u 1000 -M -s /sbin/nologin aaron

Assign a password to this user, although the user shouldn’t have any console login privileges:

# passwd aaron

Now we need to set up our Samba configuration file.  I will move the existing config file and create a fresh copy to be more concise. But don’t delete it, as it contains a good amount of documentation so it is a handy resource if you want to add directives later.

Move the existing file and edit the new file:

# mv /etc/samba/smb.conf /etc/samba/smb.conf.bak

# vi /etc/samba/smb.conf

Now edit the new “smb.conf” file and add parameters like this:

[global]
workgroup = WORKGROUP
server string = samba
security = user
passdb backend = tdbsam
load printers = no

[data]
comment = data directory
path = /home/data
writeable = yes
public = no

The “global” section contains directives that apply to the whole Samba instance. We can define the workgroup or domain this server is a member of, the security mechanism to use (user, share, domain), and the password database type (“tdbsam”). The old “smbpasswd” password file is no longer recommended for new installations. I set the “load printers” directive to “no” because I won’t be using the CUPS printing system, and connection-refused errors will show up in “/var/log/messages” unless this is specified.

The 2nd section (and on if you have more than one share) has details on each Samba file share. In this case the share is named “data”, we can define if it is writeable, and “public” defines whether users not in the Samba password database can access the share.
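For example, a second share would simply be another section built from the same directives (hypothetical names, not part of this setup):

```
[projects]
comment = project files
path = /home/projects
writeable = yes
public = no
```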

We should test the parameters of the “smb.conf” file to make sure there are no errors:

# testparm

Once you’ve run the “testparm” command and received no errors in the output you should be set to go. You may notice that some of the parameters won’t show in the output, this is fine and indicates that some are the Samba default. We’ll now make the Samba password for the user we are adding:

# smbpasswd -a aaron
New SMB password:
Retype new SMB password:

I received a bunch of output after entering the password, which you can see below. From what I can tell this is not a problem, and it printed a message at the bottom that the user was added. Later when I fired up Samba and connected to the share with this user, everything worked normally.

tdbsam_open: Converting version 0.0 database to version 4.0.
tdbsam_convert_backup: updated /var/lib/samba/private/passdb.tdb file.
account_policy_get: tdb_fetch_uint32 failed for type 1 (min password length), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 2 (password history), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 3 (user must logon to change password), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 4 (maximum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 5 (minimum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 6 (lockout duration), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 7 (reset count minutes), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 8 (bad lockout attempt), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 9 (disconnect time), returning 0
account_policy_get: tdb_fetch_uint32 failed for type 10 (refuse machine password change), returning 0
Added user aaron.

To confirm that the user was added to the Samba tdb database use the “pdbedit” command:

# pdbedit -w -L

Now we need to make changes to the “iptables” firewall startup config file. Backup the file and edit:

# cp /etc/sysconfig/iptables /etc/sysconfig/iptables.bak

# vi /etc/sysconfig/iptables

Add the first line, accepting packets on TCP/445. Be sure to add it above the last line of the INPUT chain with the REJECT target so that the rule will be processed.

-A INPUT -p tcp --dport 445 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited

Now configure the “smb” daemon to start automatically, then start “smb”:

# chkconfig smb on
# service smb start

If you now switch over to a Samba/SMB client you should be able to map a drive or browse the shares on the Samba server. To browse the available shares you will need to manually enter something like “\\server1” or “\\” without quotes in the address bar of Windows Explorer; the server won’t appear in Network Places. To enable full network browsing, more configuration would be needed and you would probably need to enable the “nmb” daemon.

Happy Samba’ing!


Categories: Linux, Samba Tags: ,

Install Open Source VMware Tools on Red Hat Enterprise/CentOS/Scientific Linux 6

March 24, 2011 31 comments

VMware now makes a repository available for us to install the VMware tools for a variety of Linux distributions including Red Hat, Scientific, CentOS, and Ubuntu.  In this example I will install VMware tools on a Red Hat Enterprise/CentOS/Scientific Linux 6 guest running on a VMware ESXi 4.1 host.

First import the VMware repository GPG signing public keys:

# rpm --import
# rpm --import

Now add the VMware repository.  If you’d like, you can use the “echo” command below, or simply create the file with the contents listed below it.  There are other packages available in the repository for other Linux distros, architectures, and ESX host versions.  Again, I am using the Red Hat Enterprise 6/VMware ESXi 4.1 version.

# echo -e "[vmware-tools]\nname=VMware Tools\nbaseurl=\
/tools/esx/4.1latest/rhel6/\$basearch\nenabled=1\ngpgcheck=1" > /etc/yum.repos.d\

Now we can list the contents of the new repo file:

[root@server1 ~]# cat /etc/yum.repos.d/vmware-tools.repo

Here is what the contents should look like:

[vmware-tools]
name=VMware Tools
baseurl=.../tools/esx/4.1latest/rhel6/$basearch
enabled=1
gpgcheck=1

It is now time to run the actual install of VMware tools.  In my case I am installing on a server system without X11 graphical interface so this is the minimum install:

# yum -y install vmware-open-vm-tools-nox

If you are installing on a workstation or server with X11 installed and would like the VMware display adapter and mouse drivers loaded use this command.  The install will be a bit bigger:

# yum -y install vmware-open-vm-tools

You are now up and running with VMware tools!

Categories: Linux, VMware Tags: ,

Configure Automount/Autofs on Ubuntu 10.10 Maverick Linux

March 21, 2011 1 comment

Automount/autofs is a Linux daemon which allows for behind-the-scenes mounting and unmounting of NFS-exported directories.  Basically, with autofs, NFS shares are automatically mounted when a user or system attempts to access their resources and disconnected after a period of inactivity.  This minimizes the number of active NFS mounts and is generally transparent to users.

First we’ll install autofs and include the NFS client:

# sudo apt-get -y install autofs nfs-common

Now we need to define autofs maps, which are configuration files that tell autofs which NFS mounts to define.  With autofs there are two types of maps, direct and indirect.  With direct maps we define a list of filesystems to mount that do not share a common higher-level directory on the client.  With indirect maps the mounts share a common directory hierarchy, and a bit less overhead is required.  The advantage of direct maps is that if a user runs a command such as “ls” on the directory structure above, the directory will show up; with indirect maps the user has to actually access the contents of the directory itself.  This can cause some confusion, because running “ls” on a directory containing indirect mounts will not show the autofs directories until the contents within have been accessed.  Indirect maps may also not be visible when browsing the directory structure with a GUI file manager; you’ll need to type in the full path to get to them.  In this example I’ll show the use of both types of maps.

Master Map

First we need to define a master map, which basically will tell us what indirect/direct maps we want to use and the appropriate config files to read.  Edit the “auto.master” file and append this content:

# sudo vi /etc/auto.master

# directory    map
/server1       /etc/auto.server1
/-             /etc/

The first entry is an indirect map: all of the mounts will be created under the /server1 directory and the configuration will be read from “auto.server1”.  For direct maps we use the special mount point “/-” and the configuration is read from the file named in the second column.

Indirect Maps

Time to set up our indirect maps:

# sudo vi /etc/auto.server1

apps        -ro server1:/nfs/apps
files       -fstype=nfs4 server1:/nfs/files

The first column represents the subdirectory to be created under “/server1”.  The second shows the host and NFS export, with apps mounted as read only.  Notice with files that we specify NFSv4, obviously the NFS share must be compatible with NFSv4.  Not specifying an “fstype” should revert autofs to using NFSv3.

Direct Maps

Now we’ll do a direct map:

# sudo vi /etc/

/mnt/data     server1:/nfs/data

Setting up a direct map is basically like configuring an NFS mount in the “/etc/fstab” file.  Include the full directory name from /, and include the host and NFS export names.  The options that we used with the indirect maps above can also be used on direct map mounts if desired.
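For comparison, the direct map above describes roughly the same mount as this static “/etc/fstab” entry would, except that autofs performs the mount on demand and unmounts it after the idle timeout:

```
server1:/nfs/data   /mnt/data   nfs   defaults   0 0
```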

While I don’t believe it is explicitly required, I have had difficulty accessing NFS shares unless the directory for the direct map is created manually before running autofs:

# sudo mkdir /mnt/data

Now we need to configure automount/autofs to start when the system starts:

# sudo update-rc.d autofs defaults

I am using the “update-rc.d” command, which has very similar functionality to the “chkconfig” utility in the Red Hat world.  It creates links in the various runlevel directories to the daemon’s initialization script.  “autofs” is the daemon name, and the “defaults” option tells the system to start autofs in runlevels 2 through 5 and stop it in 0, 1, and 6.  Most of the startup runlevels are not used with Debian/Ubuntu; runlevel 2 is the one that matters most to us.

Now restart the autofs daemon to pickup the new configuration:

# sudo service autofs restart

Browse to the newly established autofs mounts or type in the full path where you mounted the NFS export if using indirect maps.  The files on your NFS host should now be available on your client!

Categories: Linux, NFS, Ubuntu Tags: , ,

Configure NFS Server v3 and v4 on Scientific Linux 6 and Red Hat Enterprise Linux (RHEL) 6

March 18, 2011 14 comments

Recently the latest version of Scientific Linux 6 was released. Scientific Linux is a distribution which uses Red Hat Enterprise Linux as its upstream and aims to be compatible with binaries compiled for Red Hat Enterprise. I am really impressed with the quality of this distro and the timeliness with which updates and security fixes are distributed. Thanks to all the developers and testers on the Scientific Linux team! Now let’s move on to configuring an NFS server on RHEL/Scientific Linux.

In my environment I will be using VMware ESXi 4.1 and Ubuntu 10.10 as NFS clients. ESXi 4.1 supports at most NFSv3, so that version will need to remain enabled. Fortunately, out of the box the NFS server on RHEL/Scientific Linux appears to support both NFS v3 and v4. Ubuntu 10.10 will use the NFSv4 protocol by default.

First make a directory to serve as the NFS export mount and assign permissions. If you’d like anyone to be able to write to it, also open up write permissions on this directory. Be careful with this, as there are security implications: any client that mounts the share will be able to write to it:

# mkdir /nfs
# chmod a+w /nfs

Now we need to install the NFS server packages. We will include a package named “rpcbind”, which is a renamed reimplementation of the older “portmap” service. Note that “rpcbind” may not need to be running if you are going to use NFSv4 only, but it is a dependency of the “nfs-utils” package.

# yum -y install nfs-utils rpcbind

Verify that the required services are configured to start, “rpcbind” and “nfslock” should be on by default anyhow:

# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on

Configure Iptables Firewall for NFS

Rather than disabling the firewall it is a good idea to configure NFS to work with iptables. For NFSv3 we need to lock several daemons related to rpcbind/portmap to statically assigned ports.  We will then specify these ports to be made available in the INPUT chain for inbound traffic. Fortunately for NFSv4 this is greatly simplified and in a basic configuration TCP 2049 should be the only inbound port required.

First edit the “/etc/sysconfig/nfs” file and uncomment the static port directives. You can customize the ports if you wish, but I will stick with the defaults:

# vi /etc/sysconfig/nfs
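Once uncommented, the static port directives should look something like this (the defaults shown here correspond to the ports opened in the iptables rules below):

```shell
# /etc/sysconfig/nfs -- default static port assignments
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```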


We now need to modify the iptables firewall configuration to allow access to the NFS ports. I will use the “iptables” command and insert the appropriate rules:

# iptables -I INPUT -m multiport -p tcp --dports 111,662,875,892,2049,32803 -j ACCEPT
# iptables -I INPUT -m multiport -p udp --dports 111,662,875,892,2049,32769 -j ACCEPT

Now save the iptables configuration to the config file so it will apply when the system is restarted:

# service iptables save

Now we need to edit “/etc/exports” and add the path to publish over NFS. In this example I will make the NFS export available to clients on a specific subnet. I will also allow read/write access, specify synchronous writes, and allow root access. Asynchronous writes are supposed to be safe in NFSv3 and would allow for higher performance if you desire. The root access (no_root_squash) is potentially a security risk, but AFAIK it is necessary with VMware ESXi.

# vi /etc/exports
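As a sketch, an export line with the options described above might look like this (the 192.168.10.0/24 subnet is an assumption for illustration; substitute your own network):

```shell
# /etc/exports -- rw = read/write, sync = synchronous writes,
# no_root_squash = allow root access (needed for ESXi, AFAIK)
/nfs    192.168.10.0/24(rw,sync,no_root_squash)
```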


Configure SELinux for NFS Export

Rather than disable SELinux it is a good idea to configure it to allow remote clients to access files that are exported via NFS.  This is fairly simple: set the appropriate SELinux boolean value using the “setsebool” utility.  In this example we’ll use the read/write boolean, “nfs_export_all_rw”; there is also “nfs_export_all_ro” to allow read-only NFS exports and “use_nfs_home_dirs” to allow home directories to be exported.

# setsebool -P nfs_export_all_rw 1

Now we will start the NFS services:

# service rpcbind start
# service nfs start
# service nfslock start

If at any point you add or remove directory exports in the “/etc/exports” file, run “exportfs” to refresh the export table (the -r flag re-exports all entries and removes deleted ones):

# exportfs -ra

Implement TCP Wrappers for Greater Security

TCP Wrappers allow us greater scrutiny over which hosts may access certain listening daemons on the NFS server, beyond what iptables alone provides. Keep in mind that TCP Wrappers parses “hosts.allow” first, then “hosts.deny”, and the first match determines access. If there is no match in either file, access will be permitted.

Append a rule with a subnet or domain name appropriate for your environment to restrict allowable access. Domain names are specified with a leading period, such as “.example.com” without the quotations. The subnet can also be written like “192.168.10.” if desired, instead of including the netmask.

# vi /etc/hosts.allow
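An illustrative “hosts.allow” entry might look like the following. The daemon list is an assumption covering the usual NFS-related services that honor TCP Wrappers, and the “192.168.10.” subnet is for illustration; adjust both to your environment:

```shell
# /etc/hosts.allow -- permit NFS-related daemons from the local subnet
rpcbind mountd statd lockd rquotad : 192.168.10.
```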


Append these directives to the “hosts.deny” file to deny access from all other domains or networks:

# vi /etc/hosts.deny
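A matching “hosts.deny” entry could then deny those same daemons to everyone else (again, the daemon list is an assumption; using ALL for the daemon field would also deny other wrapped services such as sshd, so list daemons explicitly):

```shell
# /etc/hosts.deny -- deny the NFS-related daemons from all other hosts
rpcbind mountd statd lockd rquotad : ALL
```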


And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions running simply type:

# nfsstat

Good luck with your new NFS server!


Categories: ESXi, Linux, NFS, VMware