The latest infrastructure I've inherited is loaded with raw device mappings (RDMs). My first order of business was to get rid of them, especially since we aren't using them for any reason other than a possible performance improvement.
The steps we've been taking to get rid of them:
- Convert from a physical RDM to a virtual RDM
- Shut down system
- Take note of SCSI information
- Remove and Delete from Disk
- Re-add the RDM as a virtual RDM instead
- Perform a Storage Migration from one datastore to any other datastore, specifically moving the virtual RDM
- Once complete, check the settings on the VM and verify that the hard disk is listed as "virtual disk"
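The "take note of SCSI information" step is the one you really don't want to fumble, since the disk has to come back at the same address. A minimal sketch of that bookkeeping, using made-up dicts rather than the real vSphere API (field names and the sample LUN ID are hypothetical):

```python
# Sketch: record the SCSI placement of each physical-mode RDM before
# removing it, so it can be re-added as a virtual RDM at the same address.
# The disk records below are illustrative dicts, not real vSphere objects.

def rdm_conversion_notes(disks):
    """Return (label, scsi_address, lun_uuid) for every physical-mode RDM."""
    notes = []
    for d in disks:
        if d.get("compatibility_mode") != "physicalMode":
            continue  # already a virtual RDM, or a plain VMDK
        scsi = f"SCSI({d['controller']}:{d['unit']})"
        notes.append((d["label"], scsi, d["lun_uuid"]))
    return notes

disks = [
    {"label": "Hard disk 1", "compatibility_mode": None,
     "controller": 0, "unit": 0, "lun_uuid": None},           # boot VMDK
    {"label": "Hard disk 2", "compatibility_mode": "physicalMode",
     "controller": 1, "unit": 0, "lun_uuid": "naa.60a98000aabbccdd"},
]

for label, scsi, uuid in rdm_conversion_notes(disks):
    print(f"{label}: {scsi} -> {uuid}")
```

In practice we just kept a spreadsheet, but the idea is the same: capture controller, unit, and LUN identifier for every physical RDM before touching it.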
A couple of the pain points we've run into:
Removing and deleting the physical RDMs did not work as planned. Roughly 10% of the VMs hit a problem where the pointer files were not properly removed, so the RDMs could not be remapped as virtual RDMs. We could still add a hard disk, point it at the leftover pointer files, and it would be added back to the VM just fine. We tried rescanning HBAs, we tried different SCSI controllers, etc.
Finally, we figured out that by going into the datastore, manually deleting the pointer files, and then vMotioning the VMs to another ESXi host within the cluster, we could map those previously used LUNs as new RDMs.
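The manual cleanup boils down to spotting pointer files that no VM references anymore. A rough sketch of that check, assuming you've already pulled a datastore folder listing and the set of disks the VM's .vmx still points at (the file names are illustrative; `-rdmp.vmdk` / `-rdm.vmdk` are the usual suffixes for physical- and virtual-mode pointer files):

```python
# Sketch: flag leftover RDM pointer files for manual deletion.
# Inputs are plain strings; nothing here talks to a real datastore.

def orphaned_pointer_files(datastore_files, referenced_disks):
    """Return RDM pointer files that no VM references anymore."""
    orphans = []
    for f in datastore_files:
        is_pointer = f.endswith("-rdmp.vmdk") or f.endswith("-rdm.vmdk")
        if is_pointer and f not in referenced_disks:
            orphans.append(f)
    return orphans

files = ["vm01.vmx", "vm01.vmdk", "vm01_1-rdmp.vmdk", "vm01_2-rdmp.vmdk"]
referenced = {"vm01.vmdk", "vm01_2-rdmp.vmdk"}
print(orphaned_pointer_files(files, referenced))  # ['vm01_1-rdmp.vmdk']
```

Anything this flags is a candidate for the manual delete, after which the vMotion to another host lets the LUN be remapped.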
As for Storage vMotioning the virtual RDMs to a new datastore: if we SvMotioned the RDM to a Storage DRS datastore cluster, it only moved the pointer files. If we checked the "Disable Storage DRS" option and selected an individual VMFS datastore, it did the conversion over to VMDK. It adds an extra step, but still gets the job done.
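For anyone scripting this, the destination-dependent behavior we saw is worth encoding as a guard. This is just the observed behavior from our environment expressed as a toy function, not documented vSphere logic:

```python
# Sketch of the Storage vMotion behavior we observed for virtual RDMs:
# only a single VMFS datastore destination (Storage DRS disabled)
# actually converts the disk to a flat VMDK.

def svmotion_outcome(destination_is_sdrs_cluster):
    """What moving a virtual RDM did, per the behavior we observed."""
    if destination_is_sdrs_cluster:
        return "pointer files moved; disk stays an RDM"
    return "converted to VMDK"

print(svmotion_outcome(True))
print(svmotion_outcome(False))
```

So a migration script should refuse SDRS cluster targets for the conversion step, or it will silently shuffle pointer files and leave the RDM in place.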
Only 100+ more RDMs to go... Good times.