Configure NFS Server v3 and v4 on Scientific Linux 6 and Red Hat Enterprise Linux (RHEL) 6

Recently the latest version of Scientific Linux 6 was released. Scientific Linux is a distribution which uses Red Hat Enterprise Linux as its upstream and aims to be compatible with binaries compiled for Red Hat Enterprise. I am really impressed with the quality of this distro and the timeliness with which updates and security fixes are distributed. Thanks to all the developers and testers on the Scientific Linux team! Now let’s move on to configuring an NFS server on RHEL/Scientific Linux.

In my environment I will be using VMware ESXi 4.1 and Ubuntu 10.10 as NFS clients. ESXi 4.1 supports NFS v3 at most, so that version will need to remain enabled. Fortunately, out of the box the NFS server on RHEL/Scientific Linux supports both NFS v3 and v4. Ubuntu 10.10 will use the NFSv4 protocol by default.

First, make a directory for the NFS export and assign permissions. Open up write permissions on this directory if you’d like anyone to be able to write to it, but be careful: this has security implications, since any client that mounts the share will be able to write to it:

# mkdir /nfs
# chmod a+w /nfs

Now we need to install the NFS server packages. We will include a package named “rpcbind”, which is the renamed reimplementation of the “portmap” service. Note that “rpcbind” may not need to be running if you are going to use NFSv4 only, but it is a dependency of the “nfs-utils” package.

# yum -y install nfs-utils rpcbind

Verify that the required services are configured to start at boot; “rpcbind” and “nfslock” should be enabled by default anyway:

# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on

Configure Iptables Firewall for NFS

Rather than disabling the firewall, it is a good idea to configure NFS to work with iptables. For NFSv3 we need to lock several daemons related to rpcbind/portmap to statically assigned ports. We will then open these ports in the INPUT chain for inbound traffic. Fortunately, for NFSv4 this is greatly simplified: in a basic configuration, TCP 2049 should be the only inbound port required.

First edit the “/etc/sysconfig/nfs” file and uncomment these directives. You can customize the ports if you wish but I will stick with the defaults:

# vi /etc/sysconfig/nfs
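On RHEL 6 these directives ship commented out; uncommented with their stock default values, the relevant portion of “/etc/sysconfig/nfs” looks roughly like this (these are the same port numbers the firewall rules below assume):

```
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```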


We now need to modify the iptables firewall configuration to allow access to the NFS ports. I will use the “iptables” command and insert the appropriate rules:

# iptables -I INPUT -m multiport -p tcp --dports 111,662,875,892,2049,32803 -j ACCEPT
# iptables -I INPUT -m multiport -p udp --dports 111,662,875,892,2049,32769 -j ACCEPT

Now save the iptables configuration to the config file so it will apply when the system is restarted:

# service iptables save

Now we need to edit “/etc/exports” and add the path to publish via NFS. In this example I will make the NFS export available to clients on the local subnet. I will also allow read/write access, specify synchronous writes, and allow root access. Asynchronous writes are considered safe in NFSv3 and would allow for higher performance if you desire. The root access is potentially a security risk, but as far as I know it is necessary with VMware ESXi.

# vi /etc/exports
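A minimal entry matching the description above might look like the following, assuming a 192.168.10.0/24 client subnet (substitute your own network):

```
/nfs 192.168.10.0/24(rw,sync,no_root_squash)
```

If you prefer asynchronous writes for performance, replace “sync” with “async”.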


Configure SELinux for NFS Export

Rather than disable SELinux, it is a good idea to configure it to allow remote clients to access files exported via NFS. This is fairly simple and involves setting an SELinux boolean with the “setsebool” utility. In this example we’ll use the “nfs_export_all_rw” read/write boolean, but we can also use “nfs_export_all_ro” to allow read-only NFS exports and “use_nfs_home_dirs” to allow home directories to be exported.

# setsebool -P nfs_export_all_rw 1

Now we will start the NFS services:

# service rpcbind start
# service nfs start
# service nfslock start

If at any point you add or remove directory exports in the “/etc/exports” file, run “exportfs” to refresh the export table:

# exportfs -a

Implement TCP Wrappers for Greater Security

TCP Wrappers give us finer-grained control than iptables alone over which hosts may access certain listening daemons on the NFS server. Keep in mind that TCP Wrappers parses “hosts.allow” first, then “hosts.deny”, and the first match determines access. If there is no match in either file, access is permitted.

Append a rule with a subnet or domain name appropriate for your environment to restrict allowable access. Domain names are implemented with a preceding period, such as “.mydomain.com” without the quotations. The subnet can also be specified like “192.168.10.” if desired instead of including the netmask.

# vi /etc/hosts.allow
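As a sketch, assuming the same “192.168.10.” example subnet, the NFS-related daemons could be allowed like this (daemon names may vary slightly by release):

```
rpcbind: 192.168.10.
lockd: 192.168.10.
mountd: 192.168.10.
rquotad: 192.168.10.
statd: 192.168.10.
```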


Append these directives to the “hosts.deny” file to deny access from all other domains or networks:

# vi /etc/hosts.deny
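For example, denying the same set of daemons to any host not matched in “hosts.allow”:

```
rpcbind: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL
```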


And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions running simply type:

# nfsstat
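If you want to test the export from an Ubuntu client by hand, a mount along these lines should work. Here “nfsserver” is a placeholder hostname; with a plain setup like this one (no fsid=0 pseudo-root), the real export path is mounted directly:

```shell
# Mount using NFSv4 (the Ubuntu 10.10 default):
sudo mount -t nfs4 nfsserver:/nfs /mnt

# Or explicitly request NFSv3, as an ESXi-style client would use:
sudo mount -t nfs -o vers=3 nfsserver:/nfs /mnt
```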

Good luck with your new NFS server!





Categories: ESXi, Linux, NFS, VMware
  1. March 25, 2011 at 4:57 am

    Awesome post! It really helped me a lot! Thanks for sharing it.

  2. Mark
    July 19, 2011 at 7:46 pm

    I get confused on the rpcbind service because RHEL 6 claims it isn’t required, but NFS will not start without rpcbind started even if only using NFSv4.

    Any ideas?

    • July 20, 2011 at 4:54 am

      Hi Mark,

      I know what you are saying; I’ve heard the same thing about the rpcbind/portmapper not being necessary with NFSv4. Unfortunately I haven’t pursued how it can be disabled because I’ve always needed v3 on my NFS servers as well.


  3. Greg Jefferis
    July 30, 2011 at 6:17 pm

    Many thanks for the post. Very helpful. Greg.

  4. Constantine
    August 27, 2011 at 3:51 pm

    Thanks! This is really useful howto.

  5. teancum6366
    November 25, 2011 at 3:55 pm

    Excellent howto. Worked perfectly on Fedora 14. However, when I tried to set up on Fedora 16, the /etc/sysconfig/nfs was different enough that I’m not sure how to configure. Please provide updates.

    Also, with the switch to systemd, the services have changed, but I was able to map that pretty well. Will be grateful for any updates you are willing to publish.

  6. Jlsalto27
    March 11, 2012 at 8:33 am

    Thanks!!! I could do it easily with your help.

  7. LinuxWander
    April 22, 2012 at 3:48 am

    Did anyone try to set the context so that it is not defaulted to nfsd_t? Redhat indicated that you could specify -o context=”system_u:object_r:your_type_t:s0″. I tried it and it does not work.

    • Matthew Tanksley
      June 1, 2012 at 2:01 am

      @LinuxWander I had the same problem but it was resolved by specifying nfsvers=3. Perhaps there’s an issue with specifying a context with NFS4?

  8. ronb7575
    June 5, 2012 at 1:07 pm

    On my RHEL 6 server, I had to add:


    to /etc/hosts.allow, otherwise I was getting the following error when trying to start the nfs service.

    Starting NFS daemon: rpc.nfsd: writing fd to kernel failed: errno 13 (Permission denied)
    rpc.nfsd: unable to set any sockets for nfsd

  9. Doug
    September 14, 2012 at 8:58 pm

    THANK YOU!!!!! Great write up and very easy to follow.

  10. Milan
    November 21, 2012 at 8:11 am

    Thank you very much! You save my day :)

