Today, most companies are using some form of cloud services.
According to Gartner, the worldwide public cloud services market is now worth $131bn. When you consider that ten years ago the only clouds people had heard of were the ones in the sky, this is pretty remarkable growth.
So why has cloud adoption enjoyed such phenomenal success? And is it really such a new concept?
It could be argued that the idea of the cloud was actually introduced as early as the 1960s by J.C.R. Licklider, who voiced his vision of an ‘intergalactic computer network’. Licklider’s idea was that everyone on the globe would eventually be interconnected, accessing applications and data at any site, from anywhere. Today, we are moving ever closer to Licklider’s intergalactic future, with the cloud acting as the primary delivery mechanism.
The ‘cloud’ has become something of a catch-all phrase for anything delivered via the internet, whether infrastructure, data, applications, or a platform. However, at the root of all IT innovation is the compute power that drives and supports it, so to narrow the scope I have focused on the evolution of infrastructure, rather than Software-as-a-Service and Platform-as-a-Service.
To understand how we arrived at the version of cloud we have today, it is worth looking back at life before the ‘cloud’ and how the infrastructure environment has developed over the years. It could be argued that the mainframe represents the first iteration of cloud as we know it today.
In the 1950s, large-scale mainframes, colloquially referred to as “big iron”, were widely acknowledged as the ‘future of computing’, providing central infrastructure shared by various applications and IT services. Like the cloud, this allowed businesses to scale resources up and down depending on their needs.
Aside from maintenance and support, mainframe costs were attributed according to Million Instructions Per Second (MIPS) consumption; the more the machine was used, the more MIPS were consumed, and the higher the cost. While revolutionary at the time, and still in use to this day, mainframes also have limitations. They require massive up-front investment in physical servers that rapidly depreciate over time, and they are expensive to run and maintain. Companies are also limited by the amount of server capacity they have on-site, which means they can struggle to scale capacity according to need.
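To make that charging model concrete, here is a minimal sketch in Python; the rate and consumption figures are hypothetical, invented purely to illustrate usage-proportional billing, not drawn from any real mainframe price list.

```python
# Minimal sketch of usage-proportional mainframe billing.
# The rate and consumption figures below are hypothetical.

def monthly_mainframe_charge(mips_consumed: float, rate_per_mips: float) -> float:
    """The more MIPS consumed, the higher the charge."""
    return mips_consumed * rate_per_mips

# A hypothetical month: 400 MIPS consumed at $50 per MIPS.
print(monthly_mainframe_charge(400, 50.0))  # 20000.0
```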
Yet one of the main reasons that people started to move workloads away from the mainframe and onto client/server systems was actually one of the reasons people are today moving away from client/server and into the cloud: decentralisation. As mentioned above, mainframes act as a central resource, meaning that in the early days they had to be connected directly to computer terminals in order to operate. While this was not a problem when companies only had a handful of computers, the introduction of personal computers in the 1970s changed everything.
Throughout the late 1980s and early 1990s the distributed client/server model became extremely popular, as applications were migrated from mainframes with input/output terminals to networks of desktop computers. This offered newfound convenience and flexibility for businesses, but also added complexity to managing the new distributed environment.
By the mid-1990s the internet revolution was having a massive impact on culture and the way we consumed technology, moving us closer to the cloud that we know and love today. While the distributed on-premise model that emerged in the 1980s had delivered huge cost and productivity gains, as IT became more integral to business operations the demand for compute power grew alongside it. This created a new set of problems: companies had to find money for new servers and space to put them, leading to the rise of datacentres and adding further layers of complexity to infrastructure management.
Not only this, but there was a lot of waste due to the variable need for capacity; companies had to pay up front for servers to support their peak levels of demand, even though that level was only required infrequently. This made capacity planning a mammoth task, and often meant companies had to trade off cost against performance at times of peak traffic.
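A back-of-the-envelope Python sketch makes the waste visible; the demand figures are hypothetical, chosen only to show the gap between the peak level a company had to buy and the average level it actually used.

```python
# Hypothetical illustration of peak-provisioned capacity waste.
# All demand figures are invented for the example.

hourly_demand = [120, 110, 100, 140, 600, 180, 130, 90]  # requests/sec, sampled over a day

peak = max(hourly_demand)                           # capacity bought up front: 600
average = sum(hourly_demand) / len(hourly_demand)   # what is actually used on average

print(f"Provisioned for peak: {peak}")
print(f"Average demand:      {average:.0f}")
print(f"Utilisation:         {average / peak:.0%}")  # ~31% here; the rest is idle spend
```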
Hosting companies emerged to fill this gap, promising to manage businesses’ infrastructure for a fixed monthly fee. Hosting opened the door to what we see as cloud today, by helping businesses to get rid of their physical servers and the headaches associated with running them. Yet while hosted services had a lot of benefits, businesses increasingly began to feel locked into rigid contracts, paying for capacity and services that they were not using, and began to crave more flexibility. Then in the early 2000s came virtualisation, which allowed businesses to run different workloads on different virtual machines (VMs).
VMs, coupled with the internet, enabled the new generation of infrastructure cloud services that we see today. Cloud providers could run multiple workloads from remote locations, helping companies to deploy resources as and when needed, without contracts or large up-front investments. By giving businesses access to a network of remote servers hosted on the internet, providers let companies store and manage their data remotely, rather than on a local server. Users could simply sign up to the service, pick a server instance size, and away they went. Most importantly, they could make changes according to demand, with the option to stop the service whenever they wanted.
Not a lot has changed over the past ten years, and this model largely reflects the cloud computing we see today. While users now have greater choice over the instance size of VM they wish to deploy, they still pay for the service based on the level of capacity they provision, whether they use it or not. Unless businesses are prepared to deploy expensive and complicated technology to automatically scale capacity according to usage, the only way to avoid overspending is to have a member of staff manually adjust it: a resource-intensive and time-consuming solution.
As a result, most companies just set a level that should cover their needs, so that some of the time they are over-provisioned, and some of the time they sacrifice peak performance because they are under-provisioned: a far from ideal solution. This trend is evidenced in recent research showing that 90% of businesses see over-provisioning as a necessary evil in order to protect performance and ensure they can handle sudden spikes in demand.
This suggests that users are not enjoying the full benefits of the flexibility cloud can provide; instead, they are simply picking up their old infrastructure bad habits and moving them into the cloud. However, the introduction of containers could be the answer to these problems. Recent changes in the Linux kernel have enabled a new generation of scalable containers that could make the old VM-based server approach redundant. We have seen the likes of Docker making waves in the PaaS market with its container solution, and such companies are now starting to make waves in the infrastructure world as well.
These containers are enabling cloud infrastructure providers to offer dynamically scalable servers that are billed on actual usage, rather than the capacity that is provisioned, helping to eliminate the issues around over-provisioning. Using Linux containers, businesses no longer have to manually provision capacity. Servers scale up and down automatically, meaning businesses are billed for exactly what they use, just as they would be for any other utility. Not only is this cost efficient, but it also takes the mind-boggling complexity out of managing infrastructure; businesses can now spin up a server and let it run with no need for management.
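To contrast the two models, here is a minimal Python sketch; the prices and usage profile are hypothetical, invented purely to show how usage-based container billing diverges from capacity-based VM billing over the same workload.

```python
# Hypothetical comparison of capacity-based (VM) vs usage-based (container) billing.
# Prices and usage figures are invented for illustration.

hourly_cpu_usage = [0.2, 0.1, 0.3, 1.8, 0.4, 0.2]  # cores actually consumed each hour

# Capacity-based: provision a 2-core VM for the whole period, pay whether used or not.
provisioned_cores = 2
price_per_core_hour = 0.05
vm_bill = provisioned_cores * price_per_core_hour * len(hourly_cpu_usage)

# Usage-based: the container scales automatically, so you pay only for cores consumed.
container_bill = sum(hourly_cpu_usage) * price_per_core_hour

print(f"VM bill:        ${vm_bill:.2f}")         # $0.60
print(f"Container bill: ${container_bill:.2f}")  # $0.15
```

Under capacity-based billing the idle headroom is paid for regardless; under usage-based billing it simply never appears on the invoice.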
It looks like the revolutionary ‘intergalactic computer network’ that J.C.R. Licklider predicted all those years ago is finally set to become a reality. And it is funny how things come full circle, as people start to move back to a centralised model, similar to that of the early mainframe days. As our dependence on the cloud in all its forms increases, the big question is: where next?
We believe that just as companies naturally gravitated towards the cloud, leaving hosting companies out in the cold, the same will happen with capacity-based versus usage-based billing; logic dictates that containers will win out in the end.