VMware vSphere 5.0 Supports Giant VMs: Review

VMware vSphere 5.0 continues to set the pace for data centre x86 server virtualisation and remains the clear leader for IT managers who need a virtual infrastructure that can handle production workloads while containing operational costs.

The vSphere 5.0 ship date is imminent but as yet undisclosed; however, eWEEK Labs obtained an advance copy.

In assessing the technology, IT managers should look for significant changes to functions such as HA (high availability) and VMware's DRS (Distributed Resource Scheduler), new network-monitoring tools, and a complete reliance on the ESXi hypervisor. Despite changes to the VMware licensing model, the bottom line remains the same: organisations will pay a premium to use the enterprise-class components that make up vSphere 5.0.

IT managers who are already using vSphere 4.1 or 4.0 will quickly come up to speed on this latest version. For experienced users, the changes that bolster existing features, including enhancements to the CLI (command-line interface) and HA, along with VMware's exclusive use of ESXi (vCenter Server 5.0 can still manage existing ESX 4.x hosts), are powerful but not radically different from what came before. Where a feature has changed significantly, as HA has, my tests show that the change usually reduces the amount of training needed to use it.

One area that will need some new thinking is the sizing and outfitting of physical hosts. The new configuration maximums allow for the creation of virtual machines with up to 1TB of memory and up to 32 virtual CPUs. I can't say much yet about how these giant systems would perform; our modest-sized workloads running on medium-to-slow-speed iSCSI storage worked well. I will be following up with enterprise managers who are using the giant-sized VMs to see how well these systems perform in the field, and I'll be paying special attention to the physical machine configurations needed to run these much larger VMs.
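
As a first step in that sizing work, it helps to know how close existing VMs already come to the new ceilings. The following sketch uses the open-source pyVmomi Python bindings to inventory VM configurations; the vCenter address, credentials and the "nearing maximum" thresholds are illustrative assumptions on my part, not anything prescribed by vSphere.

# Minimal sketch, assuming the open-source pyVmomi bindings: report each
# VM's vCPU and memory configuration and flag any that approach the new
# per-VM maximums (32 vCPUs, 1TB RAM). The address, credentials and the
# flagging thresholds below are placeholders, not vSphere defaults.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip VMs whose configuration is unavailable
            continue
        cpus = vm.config.hardware.numCPU
        mem_gb = vm.config.hardware.memoryMB / 1024
        flag = "  <-- nearing 5.0 maximums" if cpus > 16 or mem_gb > 512 else ""
        print(f"{vm.name}: {cpus} vCPUs, {mem_gb:.0f}GB RAM{flag}")
finally:
    Disconnect(si)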

How we tested

I used four Intel-based servers, Cisco 3560G switches and an OpenFiler iSCSI storage management system to host a sneak preview copy of VMware vSphere 5.0 for most of August. The two stalwart HP servers, a DL360 G6 and a DL380 G6, were equipped with Nehalem-class Intel Xeon processors. The other systems, a Lenovo RD210 and an Acer AR380 F1, had more advanced Intel processors and more memory: 12GB and 24GB of RAM, respectively.

I started the first round of tests by doing an in-place upgrade of the two HP systems, going from vSphere ESX 4.1 to ESXi 5.0. Migrating the systems was a piece of cake. Where I would have benefited from more planning was in migrating VMware’s VMFS (Virtual Machine File System) storage and networking.
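
One sanity check that paid off before and after the upgrade was confirming exactly which hypervisor build each host was running. Here is a minimal pyVmomi sketch along the same lines as the earlier one; the vCenter address and credentials are again placeholders.

# Minimal sketch: print the hypervisor product string for every managed
# host, useful for spotting hosts still awaiting the 4.1-to-5.0 upgrade.
# Assumes pyVmomi; the address and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        product = host.summary.config.product  # AboutInfo for the hypervisor
        print(f"{host.name}: {product.fullName}")
finally:
    Disconnect(si)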

VMFS goes from version 3 to version 5 in this release of vSphere. VMFS 5.0 does away with variable block-size formatting by using only 1MB blocks. vSphere 5.0 can use either file system, and further testing and field experience will be needed before recommending the best approach in mixed environments. My tests showed that it is possible to upgrade in place, although the process took several steps and a fair amount of reading and planning to get our VMFS 3.0 data stores correctly migrated to VMFS 5.0. The virtualisation team will definitely need to involve the storage team in this planning to ensure a smooth transition.
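
A good way to start that joint planning is to pull each datastore's current VMFS version and block size from vCenter. The pyVmomi sketch below does exactly that; the commented-out UpgradeVmfs call reflects my reading of the vSphere API and should be verified against VMware's documentation and tested on non-production volumes first.

# Minimal sketch: list the VMFS version and block size of every VMFS
# datastore so the storage team can see which volumes still need the
# move to VMFS 5. Assumes pyVmomi; address and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            vmfs = ds.info.vmfs
            print(f"{ds.name}: VMFS {vmfs.version}, {vmfs.blockSizeMb}MB blocks")
            # In-place upgrade via the API (unverified; test before relying on it):
            # storage = ds.host[0].key.configManager.storageSystem
            # storage.UpgradeVmfs(vmfsPath=...)
finally:
    Disconnect(si)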

In a similar but less successful vein, I also implemented the latest version of the VMware vDS (vSphere Distributed Switch). In the end, I discarded all the existing networking and implemented it from scratch. While it is possible to migrate from vNetwork Standard Switches (virtual switches created on a single physical vSphere host) to a vDS, the process takes considerable planning. Further, IT managers will have the greatest chance of success if they start with hosts configured with similar numbers of NICs (network interface cards) and similarly configured standalone Standard Switches.
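
Checking those preconditions by hand across many hosts is tedious, so a short audit script helps. This pyVmomi sketch, under the same placeholder-credential assumptions as the earlier ones, reports each host's physical NIC count and Standard Switch names so mismatched hosts stand out before the migration begins.

# Minimal sketch: audit physical NIC counts and Standard Switch layouts
# across hosts, a quick check of the "similarly configured" precondition
# for a vDS migration. Assumes pyVmomi; credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        switches = ", ".join(s.name for s in net.vswitch) or "none"
        print(f"{host.name}: {len(net.pnic)} physical NICs; "
              f"Standard Switches: {switches}")
finally:
    Disconnect(si)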

Note that the vDS was introduced in vSphere 4.0, so those already using one will find it relatively simple to upgrade the switch. The journey will be considerably more involved for organisations migrating from Standard Switches. I performed various migrations, most of them with the VMs shut down and my small number of hosts joined to the vDS one at a time. It is also possible to use host profiles to transition physical hosts onto the vDS.
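
During a staged migration like this, it is useful to see at a glance which hosts have already joined the distributed switch. The sketch below, with the same pyVmomi and placeholder-credential assumptions, lists each vDS, its version and its member hosts.

# Minimal sketch: list each vSphere Distributed Switch, its version and
# the hosts already joined to it, as a progress check during a staged
# migration. Assumes pyVmomi; address and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        members = ", ".join(m.config.host.name for m in dvs.config.host) or "none"
        print(f"{dvs.name} (version {dvs.config.productInfo.version}): hosts {members}")
finally:
    Disconnect(si)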


