VMware vSphere 5.0 Supports Giant VMs: Review
Storage and networking capabilities get an overhaul in the latest version of VMware’s vSphere
The biggest changes to the vDS in vSphere 5.0 are the addition of some quite basic network troubleshooting features. I was able to use the newly added network monitor port (a feature physical switches have had since the age of the dinosaurs) to analyse virtual network traffic without routing it to an external, physical network.
Adding a monitoring port is an important step in the maturation of the VMware vDS. The old saw that “you can’t manage what you can’t monitor” holds true, and the monitor port goes a long way toward addressing it.
Even so, it’s clear from my tests that networking currently plays an “also-starring” role in vSphere 5.0, second fiddle to the starring role of creating and managing VMs. Migrating to the vDS and configuring it correctly for production use will require adding significant networking expertise to the virtualisation team. Pooling physical host NICs and configuring profiles to apply policies correctly to these pooled resources was finicky and easily broken compared with the process of creating and maintaining VMs.
High availability
One way that VM maintenance was improved in vSphere 5.0 was in the new HA features. Primary and secondary nodes are gone, replaced with a master-slave model in which participating systems elect a master as needed, eliminating the need to plan node placement. Also gone is the dependency on DNS (Domain Name System) services. A wizard-based interface speeds up HA deployment chores. In this version of HA, I was easily able to use the storage subsystem as a secondary heartbeat monitor that provided a redundant check on host status.
I turned on the HA function in my test cluster and was able to see host status, such as the number of physical host systems connected to the current HA master. I was also able to see the number of protected and unprotected VMs, and which data stores were selected during set-up to provide secondary communication between the hosts as a backup to the management network. Almost all of this configuration was performed behind the scenes by vSphere 5.0. I completed the HA setup in a matter of minutes in my test network.
Pulling the plug on various hosts resulted in the failover of VMs within the cluster, as expected.
More management
For the first time, VMware’s DRS (Distributed Resource Scheduler) has been extended to include storage. Implementing Storage DRS was a straightforward process of defining policies for my VMs. Over time, Storage DRS made decisions about the best data store for particular VMs’ disks and balanced VM access to storage resources according to the service levels I specified in my policies.
Also for the first time, vCenter Server, the management hub for any vSphere domain, is available as a virtual appliance: a virtual machine running on SUSE Linux. I used the new vCenter Server virtual appliance throughout my tests. While it shows first-version flaws (networking details such as DNS, for example, must be defined at the command line rather than in the Web-based console), the appliance worked well.
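For readers who hit the same limitation, the fix is made at the appliance’s console rather than in the browser. As a sketch (the script path below is an assumption based on VMware’s Linux-based appliance framework; verify it on your own build), the appliance’s network settings, including DNS, are driven by a menu-based configuration script:

```shell
# Run from the vCenter Server Appliance console, not the Web UI.
# Path is an assumption based on VMware's VAMI appliance framework;
# confirm it exists on your appliance build before relying on it.
/opt/vmware/share/vami/vami_config_net
```

The script walks through a text menu covering the IP address, default gateway, DNS servers and hostname, after which the Web console reflects the new settings.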
vSphere 5.0 is the first version to ship only the ESXi hypervisor. For some time, VMware has been urging users to adopt the small-footprint ESXi over ESX, with good reason: ESXi takes up only about 100MB on the physical host. Physical hosts are easy enough to manage from vCenter, and ESXi retains a basic network configuration interface on the console. For the most part, however, IT managers will interact with physical hosts through the newly enhanced CLI, batch files and vCenter.
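To give a flavour of that CLI, the sketch below uses commands from the esxcli namespace as reorganised in vSphere 5.0. Run them from the host’s local shell or through the remote vCLI; exact output varies by host, so no results are shown here.

```shell
# Basic host queries via esxcli (vSphere 5.0 namespace layout).
esxcli system version get          # hypervisor version and build
esxcli network ip interface list   # VMkernel network interfaces
esxcli software vib list           # installed software bundles (VIBs)
```

The same namespaces can be called from batch files against many hosts at once, which is how most day-to-day ESXi maintenance gets done without touching a console.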