VMware’s announcement of vSphere 4, the successor to Infrastructure 3 and ESX 3.5, prepares the ground for virtualisation technology to fully enter the data centre, enabling nearly every application to be virtualised.
Positioned as a cloud computing operating system, vSphere 4 will be offered in versions for everything from modest-sized businesses to the largest companies.
During VMware’s live demonstration, the Enterprise Plus version (the top of the range) was used. The numbers are impressive: as many as 12 processor cores per CPU, eight-way virtual symmetric multiprocessing (vSMP) for guests, no licence limit on memory per physical server (all other editions are capped at 256GB), and new fault tolerance and host profile features.
Server density, availability and deployment all look impressive, and are head and shoulders above the competition, whether open source or from Microsoft.
It’s hardly surprising that storage advances are a big part of the new feature set in vSphere; guests need virtual disks and data storage. Storage improvements, including enhancements to Storage VMotion, thin provisioning and tiered storage, should give IT managers a significant amount of breathing room as more applications are virtualised.
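To see why thin provisioning buys breathing room, consider how a sparse file behaves on an ordinary filesystem (this is my own simplified sketch, not VMware’s implementation): the guest sees the full logical disk size, but blocks are only allocated on the backing storage as data is actually written. The `create_thin_disk` and `usage` names below are illustrative, and the sketch assumes a filesystem that supports sparse files, such as ext4 or XFS.

```python
import os
import tempfile

def create_thin_disk(path: str, logical_size: int) -> None:
    """Create a sparse file whose logical size far exceeds its allocation."""
    with open(path, "wb") as f:
        f.seek(logical_size - 1)  # jump to the end without writing data
        f.write(b"\0")            # a single byte establishes the logical size

def usage(path: str) -> tuple[int, int]:
    """Return (logical size in bytes, bytes actually allocated on disk)."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512  # st_blocks counts 512-byte units

with tempfile.TemporaryDirectory() as d:
    disk = os.path.join(d, "guest.vmdk")
    create_thin_disk(disk, 10 * 1024**3)  # 10GB logical size
    logical, allocated = usage(disk)
    print(f"logical {logical} bytes, allocated {allocated} bytes")
```

A thin-provisioned virtual disk applies the same idea one layer down: capacity is promised to many guests but committed to the array only as they fill it, which is exactly where the operational slack comes from.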
There was an impressive fault tolerance demonstration given during the announcement presentation; an instance of a BlackBerry Enterprise Server was running on a blade that was unceremoniously removed from its chassis. The application continued to serve up e-mail even as vSphere automatically created a new lockstep guest to maintain the fault-tolerant configuration.
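The principle behind that demo can be sketched in a few lines (a simplified model of lockstep replication in general, not VMware’s Fault Tolerance implementation): a primary and a secondary replica consume an identical, ordered stream of inputs through a deterministic transition function, so their states remain identical at every step. When the primary disappears, the secondary already holds the same state and carries on. The `Replica` class here is hypothetical.

```python
class Replica:
    """A deterministic state machine; two copies fed the same inputs stay identical."""

    def __init__(self) -> None:
        self.state = 0
        self.log: list[int] = []

    def apply(self, event: int) -> None:
        # Deterministic transition: identical input streams yield identical states.
        self.state = (self.state * 31 + event) % 1_000_003
        self.log.append(event)

primary, secondary = Replica(), Replica()
for event in [7, 42, 19, 3]:          # the hypervisor delivers events to both in lockstep
    primary.apply(event)
    secondary.apply(event)

assert primary.state == secondary.state  # replicas are interchangeable at any instant

primary = None                           # "pull the blade from the chassis"
secondary.apply(99)                      # service continues without interruption
print(f"failover complete, state={secondary.state}")
```

The hard engineering, of course, is in keeping real virtual machines deterministic and the input streams synchronised, which is where questions like physical separation limits come in.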
There are all sorts of implementation details that I’ll be looking at when I get vSphere into our San Francisco lab. For example, what are the limits on physical separation of the lockstep systems? But, all in all, vSphere looked impressive.
Much was made of VMware’s participation in building cloud infrastructure that is in use today. It will be interesting to see, both in the lab and in actual installations, how far individual enterprise data centre operators will go toward putting applications in the private cloud. Perhaps an even bigger question is to what extent, and when, data centre equipment running mission-critical applications will be included in public/private cloud configurations. Will it make sense to dump the private data centre, and its associated operational costs, altogether?
I’m at the RSA security conference as I write these words. From the keynotes to the expo show floor, security in the virtual computing world is at the forefront.
The reason for the attention is that vendors and practitioners see an opportunity to “do security right.” Virtualisation is as close as we’re going to get to an infrastructure do-over.