Have an ESXi host which is a standalone box? No VMware Update Manager? No vMA?
Well, they still require patches. Fortunately, you can use the stripped-down console included with ESXi to update it.
Start by heading out to the VMware Patches portal http://www.vmware.com/patchmgr/download.portal and downloading the necessary patches for the server that needs patching.
Upload the patch zip file to a datastore the server can access, using either SCP or the datastore browser.
Next, make sure the SSH service has been started.
To do this in the vSphere Client, click on the desired host, click on the "Configuration" tab, then the "Security Profile" link in the "Software" box, and then click "Properties" in the top right.
Highlight "SSH" and click "Options". When the SSH Options screen pops up, click "Start", then click "OK" twice to get back to the Configuration tab.
After getting connected to the ESXi host, run the command: esxcli software vib install -d *full path to uploaded zip*
Example: esxcli software vib install -d "/vmfs/volumes/VMO-01 Datastore/Temp/update-from-esxi5.0-5.0_update01.zip" (note the quotes, since the datastore name contains a space)
When ready to reboot, type "reboot" and the system will restart. Just remember to check that the SSH service has been stopped when it boots back up.
One error I ran into: if you don't give the full path to the zip file containing the update, the patching will fail with a "MetadataDownloadError" reading:
Could not download from depot at zip:/var/log/vmware/*update name*.zip?index.xml, skipping (('zip:/var/log/vmware/*update name*.zip?index.xml', '', "Error extracting index.xml from :/var/log/vmware/*update name*.zip: [Errno 2] No such file or directory: '/var/log/vmware/*update name*.zip?index.xml'"))
url = zip:/var/log/vmware/*update name*.zip?index.xml
Please refer to the log file for more details.
Once I put in the full path, it worked just fine.
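The lesson generalizes: the -d argument has to be an absolute datastore path, or esxcli resolves it relative to a different directory (/var/log/vmware in the error above). As a rough illustration, a pre-flight check could look like this (a hypothetical Python helper, not anything esxcli itself provides):

```python
import os

def depot_path_ok(path):
    # Illustrative check only: the -d argument should be an absolute
    # path to the uploaded zip; a bare filename triggers the
    # MetadataDownloadError shown above.
    return os.path.isabs(path) and path.endswith(".zip")

print(depot_path_ok("/vmfs/volumes/datastore1/Temp/update.zip"))  # True
print(depot_path_ok("update-from-esxi5.0-5.0_update01.zip"))      # False
```

The datastore name and helper name here are made up for the example; the point is simply to test the path before kicking off the install.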
Instead of the planned upgrade we were going to perform, we decided to start from scratch and do a full reinstall of our environment. That entailed registering the Dell EqualLogic HIT Kit with a new VirtualCenter.
Start off by opening the console on the VM and logging in (default username: root, default password: eql). Once logged in, select Option 8 to unregister it from the old vCenter.
From there, select Option 4 to configure vCenter. Enter the credentials for the new vCenter (IP, admin account, password, EQL HIT Kit appliance IP, and an admin email address), confirm them, and the appliance should connect to vCenter successfully.
Once back at the main screen, select Option 7 to register the appliance with vCenter, and then reboot the appliance.
After the appliance is back at the login prompt, check the vCenter "Solutions and Applications" section and make sure the EqualLogic utilities are there. For good measure, log in to one of the utilities and ensure the configuration is correct.
So I've been playing around with OpenFiler in the dev environment I've cobbled together from systems nearing or already out of warranty support.
What I wanted to test was whether there is any difference in disk performance for an ESXi 5 VM between giving each volume its own iSCSI target and mapping all the volumes to a single iSCSI target.
In this situation, I have a Dell PowerEdge 2950 with OpenFiler installed, along with 6 SATA drives in hardware RAID 5 (a PERC 6/i, I think).
Summary of the settings:
RAID 5 - 6 SATA 7.2k drives
2 iSCSI connections from Broadcom NetXtreme 5708 NIC
2 iSCSI connections from Intel 82576 NIC
Openfiler installed (latest distro)
Then I have a Dell PowerEdge 2950 with ESXi 5 installed.
Summary of the settings:
3 iSCSI connections from Intel 82576 NIC
ESXi 5 installed
So I added the Volume Group, added 2 separate volumes each worth 500GB. Then, after starting the iSCSI Target Service, I added a single iSCSI target and mapped both volumes.
Then I went into the ESXi host, added the IPs from the OpenFiler connection to the iSCSI software initiator, and rescanned the HBA. I configured all 3 iSCSI connections on the host with Round Robin path selection to the devices.
Part of the fun I found was that the switch I've obtained (an out-of-warranty 10/100/1000 24-port HP) does not happen to support jumbo frames, so all of this was run with the MTU set at 1500.
So I took a VM already running on the ESXi host's local storage and added one 2GB drive from each of the newly added volumes.
I opened up IOMeter, configured 2 workers (one for each volume) to run all the tests for 5 minutes and let them rip.
Surprisingly, I found that the single target performed at 1158.879 IOPS and 14.87 MBps total. Volume 1 came in at 580.365 IOPS and 7.437 MBps, and Volume 2 at 578.514 IOPS and 7.432 MBps.
Next, I removed the drives from the VM, unmapped both volumes in OpenFiler, and deleted the iSCSI target. I created 2 new iSCSI targets, assigned one volume to each target, and rescanned the HBAs on the ESXi host. Then I added 2GB drives (one from each volume) to the VM again and ran the same test.
Another surprise: it was slower. The separate targets together totaled 1101.91 IOPS and 14.156 MBps. Volume 1 came in at 547.439 IOPS and 7.039 MBps, and Volume 2 at 554.471 IOPS and 7.118 MBps.
While the difference isn't drastic, it's enough that I won't bother configuring separate iSCSI targets in OpenFiler environments.
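For what it's worth, the gap between the two runs works out like this (quick arithmetic on the measured totals above):

```python
# Measured totals from the two IOMeter runs above
single_target_iops = 1158.879
separate_targets_iops = 1101.91

# Percentage drop going from one shared target to separate targets
diff_pct = (single_target_iops - separate_targets_iops) / single_target_iops * 100
print(f"separate targets were slower by {diff_pct:.1f}% in IOPS")  # about 4.9%
```

Roughly a 5% penalty for the extra targets, with no offsetting benefit in this setup.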
With so many things being upgraded, it's hard to keep track of the actual benefits of each one. The VMFS-5 upgrade brings some substantial improvements: a unified 1MB block size (no more volume size limitations due to block size), a larger extent size limit that can theoretically approach 60TB on a single-extent volume, smaller sub-blocks (down to 8KB from the previous 64KB), pass-through RDMs that can approach that same limit, and more.
There are still some limitations, though: the VMDK size limit unfortunately remains at 2TB, virtual RDMs also keep their 2TB size limit, and the LUN limit remains at 256.
To start the upgrade, go to the datastore which needs upgrading and click on the "Upgrade to VMFS-5" link:
The system will check to verify that all hosts that have a connection to the selected datastore can access the VMFS-5 version:
After clicking "OK" to start the upgrade, the datastore will upgrade and then all the hosts connected to it will rescan for VMFS volumes:
Clicking back on the datastore, the version will have been updated. In this case it went to VMFS 5.54.
With the new features of VMFS-5 in place, I then increased the volume to the 5TB limit which was set on the SAN. Click on the "Properties" of the datastore, and then click on the "Increase" button:
Select the same datastore to extend the extent, click "Next", verify the current layout, and click "Next".
Select the amount of space to expand the datastore by, click "Next", then verify the information and click "Finish".
The tasks will complete and expand the datastore, then have all the hosts rescan for VMFS volumes.
After the tasks complete, it's all done and ready to go.
Upgrading from VMFS 3 to 5 was quite smooth: the VMs kept running, and nothing was noticeably affected. The same can be said for the expansion; everything went smoothly. One thing I would still recommend is creating a fresh VMFS-5 extent volume and Storage vMotioning from the VMFS-3 extent to the new one.