Performance Issue with VMware ESXi 4.1 and Physical RDM (Raw Device Mapping)
Recently I was setting up a Windows virtual machine as an NFS server using Services for UNIX and noticed that performance was very poor. Reading from the NFS share was slow, but write performance was absolutely abysmal. The speed was poor with both Windows 2003 R2 and 2008 R2 as the NFS server. A quick web search shows this is a common complaint when a VMware ESXi client mounts an export from a Windows NFS server. In my case, however, the cause turned out to be something different.
I was attempting to export a directory over NFS from a drive locally attached to an ESXi 4.1 host and configured as a physical RDM. Because the drive was locally attached, it did not present itself within the vSphere Client, and the mapping had to be created from the console. An excellent tutorial on how to get this going can be found here.
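Before creating the mapping from the console, you need the raw device's identifier. As a rough sketch (the identifiers shown on your host will differ), the locally attached disk can be found like this:

```shell
# List the raw devices visible to the ESXi host.
# Local SATA/SAS disks typically appear with a
# t10.* or naa.* identifier, which is what the
# vmkfstools RDM commands take as the source device.
ls -l /vmfs/devices/disks/
```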
I initially tried to set up a virtual RDM using “vmkfstools -r” but kept receiving an error that the device was busy, so I ended up creating a physical RDM using “vmkfstools -z”, which worked fine at first. With the physical RDM, read/write performance was fine when copying files locally on the Windows server and also over a Windows SMB share. However, read/write performance using the NFS protocol was extremely slow.
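For reference, creating the physical (pass-through) RDM from the console looks roughly like the following. The device identifier and datastore paths here are placeholders, not the actual ones from my host:

```shell
# Create a physical-compatibility RDM mapping file.
# First argument: the raw device (placeholder identifier shown).
# Second argument: where the .vmdk mapping file is created,
# typically inside the VM's directory on a VMFS datastore.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK \
    "/vmfs/volumes/datastore1/nfs-vm/rdm-physical.vmdk"
```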
So I deleted the physical RDM mapping that I had created (vmkfstools -U) and started again. This time I created a virtual RDM mapping on a different VMFS partition (the RDM appears as a .vmdk mapping file on an existing volume but takes up no space), and it was created successfully without errors. I am pleased to say that the directory exported via NFS is now performing much better. I would say the write speed is at least 12x faster!
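The two console steps above can be sketched as follows; again, the device identifier and datastore paths are placeholders to substitute with your own:

```shell
# Remove the old physical RDM mapping file. This deletes only
# the mapping descriptor, not the data on the raw disk.
vmkfstools -U "/vmfs/volumes/datastore1/nfs-vm/rdm-physical.vmdk"

# Recreate the mapping in virtual compatibility mode (-r),
# placing the .vmdk mapping file on a different VMFS volume.
vmkfstools -r /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK \
    "/vmfs/volumes/datastore2/nfs-vm/rdm-virtual.vmdk"
```

Virtual compatibility mode routes I/O through the VMkernel's virtual disk layer, which is presumably why its caching behavior differs from pass-through mode; whatever the mechanism, the NFS write path benefited dramatically in my setup.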