
Performance Issue with VMware ESXi 4.1 and Physical RDM (Raw Device Mapping)

Recently I was setting up a Windows virtual machine as an NFS server using Services for UNIX and noticed that performance was very poor.  Reading from the NFS share was slow, but write performance was absolutely abysmal.  The speed was poor with both Windows 2003 R2 and 2008 R2 as the NFS server.  A quick internet search shows this is a common issue when using a Windows NFS server with a VMware ESXi client.  In my case, however, the cause turned out to be different.

I was attempting to export NFS using a drive locally connected to an ESXi 4.1 host and configured as a physical RDM.  Since the drive was locally connected, it did not present itself within the vSphere Client, so configuration had to be done from the console.  An excellent tutorial on how to get this going can be found here.
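For reference, on ESXi 4.x the local drive's device identifier can be found from the console before creating the mapping; this is a rough sketch, and the exact device names on your host will differ:

```shell
# List SCSI devices known to the host along with their console paths (ESXi 4.x)
esxcfg-scsidevs -l

# The raw device nodes also appear under /vmfs/devices/disks;
# the RDM commands below take one of these paths as the source device
ls -l /vmfs/devices/disks/
```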

I initially tried to set up a virtual RDM using “vmkfstools -r” but kept receiving an error that the device was busy.  So I ended up creating a physical RDM using “vmkfstools -z”, which worked fine at first.  With the physical RDM, read/write performance was fine when copying files locally on the Windows server and also over a Windows SMB share.  However, reads and writes over NFS were extremely slow.
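The physical RDM mapping described above can be created roughly as follows; the device identifier, datastore name, and .vmdk path here are placeholders for illustration:

```shell
# Create a physical (pass-through) RDM mapping file on an existing VMFS datastore,
# backed by the locally attached raw device
vmkfstools -z /vmfs/devices/disks/<device-id> \
  /vmfs/volumes/datastore1/nfsvm/disk1-rdmp.vmdk
```

The resulting disk1-rdmp.vmdk can then be attached to the VM as an existing disk.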

So I deleted the physical RDM mapping that I had created (vmkfstools -U) and started again.  This time I created a virtual RDM mapping on a different VMFS partition (the RDM is set up as a .vmdk file on an existing volume but takes up no space).  This time it was created successfully, without errors.  And I am pleased to say that a directory exported via NFS is now performing much better.  I would say the write speed is at least 12x faster!
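The cleanup and re-creation steps above look roughly like this; again, the device identifier and datastore names are placeholders:

```shell
# Remove the old physical RDM mapping file
vmkfstools -U /vmfs/volumes/datastore1/nfsvm/disk1-rdmp.vmdk

# Create a virtual (non-pass-through) RDM on a different VMFS volume;
# virtual compatibility mode routes I/O through the VMkernel's virtualization layer
vmkfstools -r /vmfs/devices/disks/<device-id> \
  /vmfs/volumes/datastore2/nfsvm/disk1-rdm.vmdk
```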

  1. heartshare
    May 20, 2011 at 5:10 pm

    the RDM drive is set up as a .vmdk file on an existing volume but takes up no space

    could you share the steps how to do it ? thanks

    • May 20, 2011 at 6:39 pm

      Hi heartshare,

The two RDM-associated .vmdk files shouldn’t take up much space (essentially zero).  But if you list the “disk1-rdmp.vmdk” file with “ls -l” or stat it, it should display the size of the physical drive you are associating it with.  If “ls -l” shows a byte count of 0, you could have a problem with the link between the RDM and the physical drive, so you may have a problem in your vmkfstools command.
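As a quick sanity check (file paths here are placeholders matching the example name above):

```shell
# The mapping file should report the full size of the backing physical disk,
# even though it consumes almost no space on the datastore
ls -l /vmfs/volumes/datastore1/nfsvm/disk1-rdmp.vmdk

# Actual blocks consumed on the datastore should be near zero
du -h /vmfs/volumes/datastore1/nfsvm/disk1-rdmp.vmdk
```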


    • May 22, 2011 at 3:46 pm

      Hi heartshare,

      Here is the link that I found most helpful. Keep in mind this is for locally attached SATA storage:



