When people grow up, they generally become more staid. Technologies do the same.
Cloud computing may never have been rock’n’roll, but in its early days it was scary and transgressive. This is no longer the case. And the strongest signs of the change could be summed up in two words. Words which only get used about established, sensible technologies. Those words are ‘measurement’ and ‘benchmarks’.
It’s time for cloud providers to start answering questions about how to measure the chunks of service they provide. And how to compare the performance those chunks deliver.
At first glance, performance may not get a look-in. After all, cloud provider A can sell you as much performance as cloud provider B simply by adding more resources and upping the price. That’s kind of the point of cloud.
At this stage, price is likely to be the prime consideration, but as the market matures, the choice will increasingly be made on both price and performance. And to make that choice, users will need a sure way to compare the performance of different clouds, and to compare the size of the processing unit they get for a given price.
Movement on the first issue has begun, with SPEC (the Standard Performance Evaluation Corporation) announcing the launch of a group, OSGCloud, to define ways to measure cloud performance. SPEC deals with benchmarks – standardised loads, which can be run on all systems, to compare their performance. The cloud group will extend this to shared virtual services.
“Cloud computing is on the rise and represents a major shift in how servers are used and how their performance is measured,” said Rema Hariharan, chair of OSGCloud. “We want to assemble the best minds to define this space, create workloads, augment existing SPEC benchmarks, and develop new cloud-based benchmarks.”
This effort won’t be done overnight, but it is definitely off the starting blocks. “The OSGCloud group is well beyond the theoretical stage and actively working on the benchmark,” group member Bob Cramblitt, of Cramblitt and Company, communications manager for SPEC, told TechWeekEurope. “They anticipate having something ready in about one year – getting the right datasets, establishing the testing parameters, ensuring a level playing field, and creating the metrics and reporting formats takes time and this is a voluntary group, all of whom have day jobs.”
The group is going to start at the IaaS (Infrastructure-as-a-Service) level, where Amazon and Microsoft operate, but may move on to PaaS (Platform-as-a-Service) and SaaS (Software-as-a-Service).
That will eventually answer the question of measuring the performance of the piece of cloud you buy. But it doesn’t solve the more immediate problem: users can now buy cloud services from several providers, and they need a clear way to compare the size of the piece of cloud each one delivers.
Comparing RAM and storage is fairly easy (though a thorough buyer will want to know access times and reliability). But how about comparing CPU?
Amazon Web Services (AWS) uses its own proprietary measurement – the Elastic Compute Unit (ECU), equivalent to the CPU capacity of a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor.
Rackspace uses a different non-standard unit, selling processing power in “compute cycles”, which are “roughly equivalent to running a server with a 2.8 GHz modern processor for the same period of time”.
Other cloud providers use their own measurements. Lunacloud, for instance, uses the “vCPU”, which is equivalent to a 1.5 GHz 2010 Xeon processor.
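To see why these units are hard to compare, consider a naive normalisation. The sketch below lines up the three vendors’ advertised units on a rough GHz-equivalent scale, using only the figures quoted above (the 1.1 GHz midpoint for the ECU is an assumption of mine). Clock speed alone ignores processor architecture and generation, which is precisely the gap a proper benchmark would close.

```python
# Back-of-the-envelope sketch, NOT an official conversion: normalise each
# vendor's advertised CPU unit to an approximate GHz-equivalent, based on
# the vendors' own descriptions quoted in the article.
UNIT_GHZ_EQUIVALENT = {
    "aws_ecu": 1.1,          # assumed midpoint of AWS's 1.0-1.2 GHz figure
    "rackspace_cycle": 2.8,  # "2.8 GHz modern processor"
    "lunacloud_vcpu": 1.5,   # 1.5 GHz 2010 Xeon
}

def rough_ghz(unit: str, quantity: float) -> float:
    """Approximate total GHz-equivalent for a quantity of vendor units."""
    return UNIT_GHZ_EQUIVALENT[unit] * quantity

# e.g. four AWS ECUs versus two Lunacloud vCPUs
print(rough_ghz("aws_ecu", 4))         # 4.4
print(rough_ghz("lunacloud_vcpu", 2))  # 3.0
```

The numbers come out neatly, but they are almost meaningless: a 2007 Opteron GHz and a 2010 Xeon GHz do different amounts of work, which is exactly the article’s point.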
Amazon and Lunacloud are being quite helpful in mentioning what type of processor they are offering up a virtual equivalent to, but their units won’t be objective. One vendor’s Xeon machine using the same processor may perform differently to another’s – that’s the reason we have SPEC benchmarks for real physical servers.
“Developing common standards in data security, sovereignty and privacy is rightly occupying the focus of many in the cloud industry,” says Antonio Miguel Ferreira, CEO of Lunacloud. “However, we need complete transparency so that end-users can easily compare every part of a cloud provider’s market offer with its competitors’.”
I think we need a two-fold strategy here. First, let’s get cloud providers to be specific about what level of CPU their cloud service should approximate to, so we can get a rough idea of how much processor we get for our money.
Second, let’s get the SPEC benchmarks together so we can check whether those promises are true – and have a Cloud-SPECmark unit to objectively compare the performance of cloud services.
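If such a Cloud-SPECmark unit existed, price/performance comparison would become trivial arithmetic. The sketch below is purely hypothetical: the provider names, benchmark scores and prices are invented for illustration, since no such benchmark results exist yet.

```python
# Hypothetical illustration of price/performance comparison using an
# imagined "Cloud-SPECmark" score. All names and numbers are invented.
offers = [
    {"provider": "Cloud A", "specmark": 240, "price_per_hour": 0.12},
    {"provider": "Cloud B", "specmark": 300, "price_per_hour": 0.18},
]

for offer in offers:
    # Higher is better: benchmark points bought per dollar per hour.
    value = offer["specmark"] / offer["price_per_hour"]
    print(f'{offer["provider"]}: {value:.0f} benchmark points per $/hour')
```

With an objective score in hand, the cheaper-looking offer is not always the better value, which is the decision this benchmark effort would finally let buyers make.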
Till the benchmarks are ready, you can measure your own cloud performance – with our quiz!
Very good summary on the need for the industry to agree on benchmarks, to make comparisons easier for customers.
The cloud industry is getting more mature and going through the same process hardware vendors went through maybe 20 years ago, when SPEC benchmarks became more commonly used.
The higher you go up the cloud stack (IaaS, PaaS, SaaS), the more difficult comparison will be. But for IaaS it is definitely possible.
Price, Performance, SLA, Security/Trust are key parameters in choosing the right cloud provider, but Price and SLAs are the only objective parameters these days.