VMware vSphere 4.1: Review

While it takes advanced network expertise to design and tune the policy that drives network I/O controls, the actual implementation of the feature is quite simple. Enabling the feature and setting shares on specific physical network adapters is just a matter of walking through a couple of configuration screens that are easily accessed from the vSphere client. I was able to assign a low, normal, high or custom setting designating the number of network shares (a policy value that represents the relative importance of virtual machines using the same shared resources) to be allocated to VM, management and fault-tolerant traffic flows.
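
For administrators who would rather script these changes than click through them, the same knobs appear to be exposed by the vSphere API. The following is a minimal sketch using VMware's pyVmomi Python bindings, not a definitive implementation: the vCenter address, credentials and switch name are placeholders for my test environment, and the "virtualMachine" resource pool key is assumed to correspond to the VM traffic flow shown in the client screens.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # acceptable in a lab, not in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=context)

def find_by_name(content, vimtype, name):
    # Walk the inventory for the first managed object with a matching name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

dvs = find_by_name(si.content, vim.DistributedVirtualSwitch, "dvSwitch1")

# Switch on network resource management (the network I/O control feature).
dvs.EnableNetworkResourceManagement(enable=True)

# Raise the shares for virtual machine traffic; "management" and
# "faultTolerance" are the assumed keys for the other flows discussed above.
pool = next(p for p in dvs.networkResourcePool if p.key == "virtualMachine")
spec = vim.DVSNetworkResourcePoolConfigSpec(
    key=pool.key,
    configVersion=pool.configVersion,
    allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.high,
                              shares=0)))  # shares count only matters for "custom"
dvs.UpdateNetworkResourcePool(configSpec=[spec])

Disconnect(si)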


Storage I/O controls were equally easy to configure once the policy decisions were made and the physical prerequisites were met. In my relatively modest test environment it was no trouble to run storage I/O controls from a single vCenter Server. I tested the feature on an iSCSI-connected storage array; it also works on Fibre Channel-connected storage, but not on NFS (Network File System) or Raw Device Mapping storage. There are other requirements and restrictions, including tiered storage system certifications, that make this a feature to evaluate carefully before any strategic implementation.
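
Enabling the feature on a datastore can also be scripted. Below is a sketch against the StorageResourceManager in the vSphere API; it assumes the "si" connection and find_by_name helper from the previous sketch, and the datastore name is a stand-in for my iSCSI-backed volume.

from pyVmomi import vim

ds = find_by_name(si.content, vim.Datastore, "iscsi-ds01")

# Enable storage I/O control; the congestion threshold is the device
# latency, in milliseconds, at which share-based throttling kicks in
# (30 ms is the shipping default).
spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True,
                                                 congestionThreshold=30)
task = si.content.storageResourceManager.ConfigureDatastoreIORM_Task(
    datastore=ds, spec=spec)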

Virtual machines can be limited based on IOPS (I/O operations per second) or megabytes per second. In either case, I used storage I/O controls to limit some virtual machines in order to give others priority. The large number of considerations (for example, each virtual disk associated with a VM must be placed under control for the limit to be enforced) meant that I spent a great deal of time working out policies for a modest amount of benefit once my systems were actually running.
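
To illustrate that per-disk bookkeeping, here is a sketch that walks every virtual disk on one VM and caps each at an assumed 500 IOPS. The VM name is a placeholder, "si" and find_by_name come from the first sketch, and limits at this layer of the API are expressed in IOPS.

from pyVmomi import vim

vm = find_by_name(si.content, vim.VirtualMachine, "test-vm-01")

# Build an edit spec for each disk; every disk needs its own I/O
# allocation entry or the VM-level policy will not hold.
changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            limit=500)  # cap this disk at 500 IOPS
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))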

Memory

VMware included a handy memory innovation in vSphere 4.1 called “memory compression.” IT managers would do well to become familiar with the feature, as it is enabled by default. In my tests I saw improvements in virtual machine performance after I artificially constrained the amount of physical host memory. As my VM systems accessed memory to handle test workloads, my ESX 4.1 system began compressing virtual memory pages and storing them in a compressed memory cache.
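
The feature can be confirmed, or switched off for before-and-after comparisons, through a host's advanced options. This sketch reuses the connection helper from the first sketch; the host name is a placeholder, and "Mem.MemZipEnable" is the advanced setting key on ESX/ESXi hosts that governs the feature.

from pyVmomi import vim

host = find_by_name(si.content, vim.HostSystem, "esx01.example.com")
opt_mgr = host.configManager.advancedOption

# 1 means memory compression is on, which is the shipping default.
current = opt_mgr.QueryOptions(name="Mem.MemZipEnable")[0]
print("Mem.MemZipEnable =", current.value)

# Write a 0 back to disable the feature for comparison testing; note
# that some hosts expect an explicit long value for this option.
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Mem.MemZipEnable", value=0)])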

Since accessing this compressed cache is significantly faster than swapping memory pages to disk, the virtual machines ran much faster than they did when I disabled the feature and ran the same workloads. System and application managers will likely need to collaborate to work out the best formula for utilising memory compression. I made extensive use of the memory performance metrics to see what was happening to my test systems as I constrained the amount of host memory. IT managers should expect to devote at least several weeks of expert analysis to determining the most effective memory compression configuration for each workload.
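
Those metrics can be collected programmatically as well. The sketch below pulls the compression-related counters from the PerformanceManager, reusing the "si" and "host" references from the earlier sketches; the counter names follow vSphere's "group.name" convention, and the interval and sample count are arbitrary choices for a quick real-time check.

from pyVmomi import vim

perf = si.content.perfManager

# Map the counters of interest from the full counter catalogue.
wanted = {"mem.compressed", "mem.compressionRate", "mem.decompressionRate"}
counter_ids = [c.key for c in perf.perfCounter
               if "%s.%s" % (c.groupInfo.key, c.nameInfo.key) in wanted]

query = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=cid, instance="")
              for cid in counter_ids],
    intervalId=20,   # 20-second real-time samples
    maxSample=15)    # the most recent five minutes

for result in perf.QueryPerf(querySpec=[query]):
    for series in result.value:
        print(series.id.counterId, series.value)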

Housekeeping

In addition to the changes made in handling system resources, VMware did some housekeeping in this incremental release of vSphere. The vSphere client is still available in the vCenter 4.1 installation bits but is no longer included in the ESX and ESXi code; instead, users are directed to a VMware website to get the management client. There were some minor changes made to various interface screens, but nothing that will puzzle an experienced IT administrator.


