Configure Automount/Autofs on Ubuntu 10.10 Maverick Linux

March 21, 2011

Automount/autofs is a Linux daemon that handles behind-the-scenes mounting and unmounting of NFS-exported directories.  With autofs, an NFS share is mounted automatically when a user or process attempts to access it, then unmounted after a period of inactivity.  This minimizes the number of active NFS mounts and is generally transparent to users.

First we’ll install autofs and include the NFS client:

# sudo apt-get -y install autofs nfs-common

Now we need to define autofs maps, which are configuration files that tell autofs which NFS mounts to manage.  There are two types of maps, direct and indirect.  With direct maps we define a list of filesystems to mount that do not share a common parent directory on the client.  With indirect maps the mounts share a common directory hierarchy, and a bit less overhead is required.  The advantage of direct maps is that if the user runs a command such as “ls” on the directory above the mount point, the mount point will show up, whereas with indirect maps the user has to actually access the contents of the directory itself.  This can cause some confusion because running “ls” on a directory containing indirect mounts will not show the autofs directories until after their contents have been accessed.  Indirect maps may also not be visible when browsing the directory structure with a GUI file manager; you’ll need to type in the path explicitly to reach them.  In this example I’ll show the use of both types of maps.

Master Map

First we need to define a master map, which tells autofs which indirect/direct maps we want to use and the configuration files to read them from.  Edit the “auto.master” file and append this content:

# sudo vi /etc/auto.master

# directory    map
/server1       /etc/auto.server1
/-             /etc/auto.direct

The first entry is an indirect map: all of the mounts will be created under the /server1 directory and the configuration will be read from “auto.server1”.  For direct maps we use the special mount point “/-” and the configuration will be read from “auto.direct”.

Indirect Maps

Time to set up our indirect maps:

# sudo vi /etc/auto.server1

apps        -ro server1:/nfs/apps
files       -fstype=nfs4 server1:/nfs/files

The first column is the subdirectory to be created under “/server1”.  The second shows the host and NFS export, with apps mounted read-only.  Notice that for files we specify NFSv4; obviously the NFS export must support NFSv4.  Not specifying an “fstype” should cause autofs to fall back to NFSv3.
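
Once autofs has been restarted with this configuration (covered below), you can observe the indirect-map behavior described earlier; “/server1” will appear empty until a key such as “apps” is actually accessed.  A rough sketch using the example paths above:

# ls /server1
# ls /server1/apps
# mount | grep /server1/apps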

Direct Maps

Now we’ll do a direct map:

# sudo vi /etc/auto.direct

/mnt/data     server1:/nfs/data

Setting up a direct map is much like configuring an NFS mount in the “/etc/fstab” file.  Include the full directory path from /, followed by the host and NFS export name.  The options that we used with the indirect maps above can also be used on direct map mounts if desired.
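
For example, a hypothetical read-only NFSv4 variant of the entry above would carry its options in the second column, just like the indirect maps:

/mnt/data     -ro,fstype=nfs4     server1:/nfs/data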

While I don’t believe it is explicitly required, I have had difficulty accessing NFS shares unless the directory for the direct map is created manually before running autofs:

# sudo mkdir /mnt/data

Now we need to configure automount/autofs to start when the system starts:

# sudo update-rc.d autofs defaults

I am using the “update-rc.d” command, which is very similar in function to the “chkconfig” utility in the Red Hat world.  It creates links in the various runlevel directories to the daemon’s init script.  “autofs” is the daemon name, and the “defaults” option tells the system to start autofs in runlevels 2-5 and stop it in runlevels 0, 1, and 6.  Most of these runlevels are not distinct in Debian/Ubuntu; runlevel 2 is the one that matters most to us.
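
If you’d like to confirm what “update-rc.d” created, you can look for the autofs links in the runlevel 2 directory (the exact sequence number in the link names may vary):

# ls -l /etc/rc2.d/ | grep autofs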

Now restart the autofs daemon to pickup the new configuration:

# sudo service autofs restart

Browse to the newly established autofs mounts, or if you are using indirect maps, type in the full path to the directory where the NFS export is mounted.  The files on your NFS host should now be available on your client!
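
A quick way to confirm everything is working is to access the direct map path and then check the active NFS mounts (the paths shown are the ones used in this example):

# ls /mnt/data
# mount | grep nfs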

Categories: Linux, NFS, Ubuntu

Configure NFS Server v3 and v4 on Scientific Linux 6 and Red Hat Enterprise Linux (RHEL) 6

March 18, 2011

Recently the latest version of Scientific Linux 6 was released. Scientific Linux is a distribution which uses Red Hat Enterprise Linux as its upstream and aims to be compatible with binaries compiled for Red Hat Enterprise. I am really impressed with the quality of this distro and the timeliness with which updates and security fixes are distributed. Thanks to all the developers and testers on the Scientific Linux team! Now let’s move on to configuring an NFS server on RHEL/Scientific Linux.

In my environment I will be using VMware ESXi 4.1 and Ubuntu 10.10 as NFS clients. ESXi 4.1 supports at most NFSv3, so that version will need to remain enabled. Fortunately, out of the box the NFS server on RHEL/Scientific Linux appears to support both NFSv3 and NFSv4. Ubuntu 10.10 will use the NFSv4 protocol by default.

First make a directory for the NFS export and assign permissions. Open up write permissions on this directory if you’d like anyone to be able to write to it, but be careful: there are security implications, and anyone who mounts the share will be able to write to it:

# mkdir /nfs
# chmod a+w /nfs

Now we need to install the NFS server packages. We will include a package named “rpcbind”, which is a renamed reimplementation of the “portmap” service. Note that “rpcbind” may not need to be running if you are going to use NFSv4 only, but it is a dependency of the “nfs-utils” package.

# yum -y install nfs-utils rpcbind

Verify that the required services are configured to start; “rpcbind” and “nfslock” should be on by default anyhow:

# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on
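
To double-check that the services will come up at boot, you can list their runlevel settings:

# chkconfig --list | egrep 'nfs|rpcbind'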

Configure Iptables Firewall for NFS

Rather than disabling the firewall it is a good idea to configure NFS to work with iptables. For NFSv3 we need to lock several daemons related to rpcbind/portmap to statically assigned ports.  We will then open these ports in the INPUT chain for inbound traffic. Fortunately NFSv4 is much simpler: in a basic configuration, TCP 2049 should be the only inbound port required.

First edit the “/etc/sysconfig/nfs” file and uncomment these directives. You can customize the ports if you wish but I will stick with the defaults:

# vi /etc/sysconfig/nfs

RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

We now need to modify the iptables firewall configuration to allow access to the NFS ports. I will use the “iptables” command and insert the appropriate rules:

# iptables -I INPUT -m multiport -p tcp --dport 111,662,875,892,2049,32803 -j ACCEPT
# iptables -I INPUT -m multiport -p udp --dport 111,662,875,892,2049,32769 -j ACCEPT

Now save the iptables configuration to the config file so it will apply when the system is restarted:

# service iptables save
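
If you want to confirm the rules are in place, listing the INPUT chain should show the two multiport ACCEPT rules at the top (we inserted them with “-I”):

# iptables -L INPUT -n --line-numbers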

Now we need to edit “/etc/exports” and add the path to export over NFS. In this example I will make the export available to clients on the 192.168.10.0 subnet. I will also allow read/write access, specify synchronous writes, and allow root access. Asynchronous writes are supposed to be safe in NFSv3 and would allow for higher performance if you desire. Allowing root access is potentially a security risk, but as far as I know it is necessary with VMware ESXi.

# vi /etc/exports

/nfs    192.168.10.0/255.255.255.0(rw,sync,no_root_squash)
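
For comparison, a hypothetical variant of this export using asynchronous writes, or one restricted to read-only access, would look like one of the following instead (CIDR notation also works in place of the full netmask):

/nfs    192.168.10.0/24(rw,async,no_root_squash)
/nfs    192.168.10.0/24(ro,sync)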

Configure SELinux for NFS Export

Rather than disable SELinux it is a good idea to configure it to allow remote clients to access files that are exported via NFS.  This is fairly simple and involves setting an SELinux boolean value using the “setsebool” utility.  In this example we’ll use the read/write boolean “nfs_export_all_rw”, but there are also “nfs_export_all_ro” to allow read-only NFS exports and “use_nfs_home_dirs” to allow home directories to be exported.

# setsebool -P nfs_export_all_rw 1
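
You can verify the change, and see the related booleans mentioned above, with “getsebool”:

# getsebool -a | grep nfs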

Now we will start the NFS services:

# service rpcbind start
# service nfs start
# service nfslock start
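
With the services running, you can confirm that the NFSv3 helper daemons registered on the static ports configured earlier in “/etc/sysconfig/nfs”; mountd should show up on 892, status on 662, and nlockmgr on 32803/32769:

# rpcinfo -p | egrep 'mountd|status|nlockmgr'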

If at any point you add or remove exports in the “/etc/exports” file, run “exportfs” to update the export table:

# exportfs -a
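
At any time you can also list the active exports along with their effective options:

# exportfs -v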

Implement TCP Wrappers for Greater Security

TCP Wrappers give us finer-grained control than iptables alone over which hosts may access certain listening daemons on the NFS server. Keep in mind TCP Wrappers parses “hosts.allow” first and then “hosts.deny”, and the first match determines access. If there is no match in either file, access is permitted.

Append a rule with a subnet or domain name appropriate for your environment to restrict allowable access. Domain names are implemented with a preceding period, such as “.mydomain.com” without the quotations. The subnet can also be specified like “192.168.10.” if desired instead of including the netmask.

# vi /etc/hosts.allow

mountd: 192.168.10.0/255.255.255.0

Append these directives to the “hosts.deny” file to deny access from all other domains or networks:

# vi /etc/hosts.deny

portmap: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL

And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions in use, simply run:

# nfsstat
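
If you’d like a quick end-to-end test from an Ubuntu client, something along these lines should work (“nfsserver” is a placeholder for your server’s hostname or IP, and the nfs-common package must be installed on the client):

# sudo mkdir -p /mnt/nfstest
# sudo mount -t nfs4 nfsserver:/nfs /mnt/nfstest
# ls /mnt/nfstest
# sudo umount /mnt/nfstest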

Good luck with your new NFS server!

References:

http://www.cyberciti.biz/faq/centos-fedora-rhel-iptables-open-nfs-server-ports/

http://www.scientificlinux.org

http://prefetch.net/blog/index.php/2010/11/02/configuring-a-linux-nfs-server-in-a-selinux-managed-environment/

Categories: ESXi, Linux, NFS, VMware

Cloned Red Hat/CentOS/Scientific Linux Virtual Machines and “Device eth0 does not seem to be present” Message

February 26, 2011

Recently I was preparing some new virtual machines running Scientific Linux 6 in VMware.  I encountered some difficulty with the virtual network interface after cloning the machines.  In particular, I was unable to get the virtual NIC on a newly cloned machine recognized as a valid interface.  Upon further investigation, the NIC on the newly cloned machines was being registered as “eth1”.  We can check the currently registered “eth” devices here:

[root@sl6 ~]# ls /sys/class/net
eth1  lo  sit0

As it turns out, there is a device manager for the Linux kernel named “udev” which remembers the settings from the virtual machine’s NIC before it was cloned.  I had not run into this behavior in my previous Linux VM installs, which were mainly CentOS 5.

Since the hardware address of the network interface changes as part of the clone, the system sees the NIC after the clone as “new” and assigns it to eth1.  The simple way to move the interface back to eth0 is to edit the lines beginning with “SUBSYSTEM” in udev’s persistent network rules file.

Start off by removing the first “SUBSYSTEM” entry, which represents the “old” eth0 interface.  Then edit the second “SUBSYSTEM” entry, changing the “NAME” parameter from “eth1” to “eth0”.  Of course your config may vary from mine.  Keep in mind that each “SUBSYSTEM” line may appear wrapped in the text below, but it must remain a single line in the file.

Old file:

[root@sl6 ~]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x15ad:0x07b0 (vmxnet3) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:87:00:21", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x15ad:0x07b0 (vmxnet3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:87:00:25", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

New file:

[root@sl6 ~]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x15ad:0x07b0 (vmxnet3) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:87:00:25", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
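
As an alternative to editing the entries by hand, you can simply delete the persistent rules file and let udev regenerate it on the next boot; with only one NIC present it should come back as eth0.  A sketch, assuming the persistent-net-generator rules are active (as the header comments above indicate):

[root@sl6 ~]# rm /etc/udev/rules.d/70-persistent-net.rules
[root@sl6 ~]# reboot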

Now verify that you have a properly configured network config file; in particular, make sure the “HWADDR” matches the new NIC’s MAC address.  The example below is for Red Hat/CentOS/Scientific Linux:

[root@sl6 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="00:50:56:87:00:25"
IPV6INIT="no"
IPV6_AUTOCONF="no"
NM_CONTROLLED="no"
ONBOOT="yes"
IPADDR="192.168.10.125"
NETMASK="255.255.255.0"
NETWORK="192.168.10.0"
BROADCAST="192.168.10.255"

Now reboot the system and the NIC should be registered as eth0!
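
Once the system is back up, you can double-check that eth0 owns the expected MAC address (it should match the “HWADDR” in the config file above):

[root@sl6 ~]# ip link show eth0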

Categories: Linux, VMware

Fix for Citrix XenApp Published Apps Disabling Windows Visual Effects

December 5, 2010

At work we use Citrix XenApp extensively to publish applications.  One of the minor issues I have encountered is that the Connection Center component of the Citrix Plugin client disables some of the Windows visual effects at logon, including the “Animate Window Minimizing and Maximizing” setting.  I have also noticed that the “Show Windows Contents While Dragging” setting sometimes gets deactivated.  Having these disabled is particularly unattractive if you are using the Aero interface.

To stop these from being turned off it is necessary to prevent the Connection Center process “concentr.exe” from running when the user logs on.  Keep in mind that disabling the Connection Center will prevent you from accessing the features it provides.  This fix only affects programs and windows running locally on the computer; apps published from XenApp will still have the visual effects disabled.  On my computer my day-to-day account is a standard user, so I will remove execute permissions on the “concentr.exe” file for the local Users group.

I have found that this is the only way to stop the Connection Center from running without the Plugin (in particular the Web version) detecting that something has changed and re-running its initialization.  Renaming the “concentr.exe” file or disabling it from running within “msconfig” didn’t work for me.

First navigate to “C:\Program Files\Citrix\ICA Client” on 32-bit Windows or “C:\Program Files (x86)\Citrix\ICA Client” on 64-bit Windows.

Right click “concentr.exe” and choose Properties.

Read more…

Categories: Citrix, XenApp

Set Up Windows 2003 R2 NFS Server for VMware ESXi Backups

November 21, 2010

Generally my preference is to use Linux as an NFS server.  On the internet you will see frequent references to the belief that NFS works better on Linux/UNIX.  Recently I decided to try setting up NFS services on Windows to see how well it would perform.  In this tutorial I will set up Services for UNIX 3.5 on a Windows 2003 R2 server and configure it with the User Name Mapping service to allow a VMware ESXi host to use it as a datastore for VMs or backups via non-anonymous connections.

First off, grab the Windows Services for UNIX (SFU) installation files here.  Extract the files from the download and run the setup.

Click Next.

Select Custom Installation and click Next.

Read more…


Categories: ESXi, NFS, VMware, Windows

Simple Way to Install Apache and PHP on Windows

November 10, 2010

I do a bit of web development, and my preferred web application environment is PHP.  So recently I was reinstalling Apache and PHP on my Windows development computer, and I found a very easy way to get them installed silently with pretty much no hassle.  Traditionally I have done the install by manually extracting PHP and editing php.ini and httpd.conf to load the appropriate modules.  Thanks to Nicolas for his insightful instructions!

First go out and grab the compiled Windows versions of Apache and PHP:

http://httpd.apache.org/download.cgi#apache22

http://www.php.net/downloads.php

This example uses the non-SSL Apache 2.2 and the PHP 5 Win32 installer package.  There are many different install types available for PHP; I avoided the debug and the non-thread-safe versions.

Now run silent installs of the Apache and PHP packages with the switches/parameters below at the command prompt, then restart Apache.  Make sure to include the APACHEDIR parameter when running the PHP install so it can configure Apache with the appropriate PHP module.

C:\temp> msiexec /i apache_2.2.17-win32-x86-no_ssl.msi /passive ALLUSERS=1 SERVERADMIN=admin@localhost SERVERNAME=localhost SERVERDOMAIN=localhost SERVERPORT=80 INSTALLDIR=c:\apache

C:\temp> msiexec /i php-5.2.14-win32-installer.msi /qn APACHEDIR=c:\apache INSTALLDIR=c:\php ADDLOCAL=ext_php_mssql,apache22

C:\temp> net stop "Apache2.2"

C:\temp> net start "Apache2.2"

This also includes SQL Server support for PHP if needed.  Your Windows host should now be ready to serve PHP!

References

http://www.php.net/manual/en/install.windows.apache2.php#92064

Categories: Apache, PHP, Windows

Create VMware ESXi 4.1 USB Flash Boot Drive for White Boxes with Unsupported Network Cards

November 6, 2010

One of the convenient features of VMware ESXi 4.1 is the ability to use a USB flash drive as a boot device. While installing ESXi on supported server hardware is relatively easy, installing on unsupported hardware can require some maneuvering to get the installation to complete properly.  In particular, the ESXi 4+ installer needs to detect a supported network card to succeed.  You can’t incorporate unsupported network drivers afterward as was possible with ESXi 3/3.5.

To get around these issues I will take advantage of the ability to install VMware ESXi inside a virtual machine.  In this tutorial I will install ESXi 4.1 to a USB drive attached to a virtual machine running inside VMware Workstation 7.1.  After the installation has succeeded I will attach the new USB boot drive to a CentOS Linux virtual machine (this tutorial assumes that you have one available) and add the network card drivers needed by my ESXi white box.

First inside VMware Workstation configure a new virtual machine.

Choose typical and click Next.

In this case I will use the ESXi 4.1 ISO, which can be obtained for free from VMware, as the installation media.  Select the second option and click Browse.

Browse and select the ESXi ISO and click Open.

Click Next.

For Guest OS, select VMware ESX and choose the ESX Server 4.0 version.

Read more…


Categories: ESXi, VMware