
Cloud Evolution: Intergalactic Infrastructure Of The Future

Today, most companies are using some form of cloud services.

According to Gartner, the worldwide public cloud services market is now worth $131bn – when you consider that ten years ago the only clouds people had heard of were the ones in the sky, this is pretty remarkable growth.

So why has cloud adoption enjoyed such phenomenal success? And is it really such a new concept?

It could be argued that the idea of the cloud was actually introduced as early as the 1960s by J.C.R. Licklider, who voiced his vision of an ‘intergalactic computer network’. Licklider’s idea was that everyone on the globe would eventually be interconnected, able to access applications and data at any site, from anywhere. Today, we are moving ever closer to Licklider’s intergalactic future, with the cloud acting as the primary delivery mechanism.

The ‘cloud’ has become something of a catch-all phrase for anything that can be delivered via the internet, whether it is infrastructure, data, applications or a platform. However, at the root of all IT innovation is the compute power that drives and supports it – so to narrow the scope, I have focused on the evolution of infrastructure, rather than Software-as-a-Service and Platform-as-a-Service.

The Iron Age

To understand how we have come to the version of cloud we have today, it is worth having a look back to life before ‘cloud’ and how the infrastructure environment has developed over the years. It could be argued that the mainframe represents the first iteration of cloud as we know it today.

Widely acknowledged in the 1950s as the ‘future of computing’, large-scale mainframes, colloquially referred to as “big iron”, provided a large, centralised infrastructure shared by various applications and IT services. As with the cloud, businesses could scale resources up and down depending on their needs.

Aside from maintenance and support, mainframe costs were attributed according to Million Instructions Per Second (MIPS) consumption; the more the machine was used, the more MIPS were consumed, and the higher the cost. While revolutionary at the time, and still in use to this day, mainframes also have limitations. They require massive up-front investment in physical hardware whose value depreciates rapidly over time, and they are expensive to run and maintain. Companies are also limited by the amount of server capacity they have on-site, which means they can struggle to scale capacity according to need.

Yet one of the main reasons that people started to move workloads away from the mainframe and onto client servers was actually one of the reasons people are today moving away from client servers and into the cloud: decentralisation. As mentioned above, mainframes act as a central resource, meaning in the early days they had to be connected directly to computer terminals in order to operate. While this was not a problem when companies only had a handful of computers, the introduction of personal computers in the 1970s changed everything.

Throughout the late 1980s and early 1990s the distributed client/server model became extremely popular, as applications were migrated from mainframes with input/output terminals to networks of desktop computers. This offered newfound convenience and flexibility for businesses, but also added layers of complication in terms of managing this new distributed environment.

The World Wide Web

By the mid-1990s the internet revolution was having a massive impact on culture and the way we consumed technology, and it was also moving us closer to the cloud that we know and love today. While the distributed on-premise model that had emerged in the 80s had delivered huge cost and productivity gains, as IT became more integral to business operations the demand for computing power increased alongside it. This created a new set of problems, as companies had to find money for new servers and space to put them, leading to the rise of dedicated datacentres and adding further layers of complexity to infrastructure management.

Not only this, but there was a lot of waste due to the variable need for capacity; companies had to pay up front for enough servers to support their peak levels of demand, even though that peak was only reached infrequently. This made capacity planning a mammoth task, and often meant companies had to trade off performance at times of peak traffic.

Hosting companies emerged to fill this gap, promising to manage businesses’ infrastructure for a fixed monthly fee. Hosting opened the door to what we see as cloud today, by helping businesses to get rid of their physical servers and the headaches associated with running them. Yet while hosted services had a lot of benefits, businesses increasingly began to feel locked into rigid contracts, paying for capacity and services they were not using, and began to crave more flexibility. Then, in the early 2000s, came virtualisation, which allowed businesses to run different workloads on different virtual machines (VMs).

By virtualising servers, businesses could spin up new servers without having to take out more datacentre space, helping to address a lot of the issues faced in the on-premise world. However, these machines still needed to be managed manually, and still required a physical server to provide the compute power needed to run them.

VMs, coupled with the internet, enabled the new generation of infrastructure cloud services that we see today. Cloud providers could run multiple workloads from remote locations, helping companies to deploy resources as and when needed, without contracts or large up-front investments. By giving businesses access to a network of remote servers hosted on the internet, providers allowed companies to store and manage their data remotely, rather than on a local server. Users could simply sign up to the service, pick a server instance size, and away they went. Most importantly, they could make changes according to demand, with the option to stop the service whenever they wanted.
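
To make that “pick an instance size and away you go” idea concrete, here is a minimal sketch of provisioning a virtual server on demand through a public cloud API. This is an illustration only, not something drawn from the article: it assumes an AWS account and the boto3 Python library, and the region and image ID shown are placeholders.

    # Illustrative sketch only (assumes an AWS account and the boto3 library).
    # The region and image ID below are placeholders, not values from the article.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Launch a single small instance on demand: no contract, no up-front hardware spend.
    response = ec2.run_instances(
        ImageId="ami-00000000",     # hypothetical machine image
        InstanceType="t2.micro",    # the 'instance size' the user picks
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # Stop the server whenever demand drops, and stop paying for the compute.
    ec2.stop_instances(InstanceIds=[instance_id])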

Breaking bad habits

Not a lot has changed over the past ten years, and this model largely reflects the cloud computing we see today. While users now have greater choice over the VM instance size they wish to deploy, they still pay for the service based on the level of capacity they provision, whether they use it or not. Unless businesses are prepared to deploy expensive and complicated technology to automatically scale capacity according to usage, the only way to avoid overspending is to have a member of staff adjust it manually, a resource-intensive and time-consuming solution.

As a result, most companies just set a level which should cover their needs, so that some of the time they are over-provisioned, and some of the time they have to sacrifice peak performance as they are under-provisioned; a far from ideal solution. This trend is evidenced in recent research showing that 90% of businesses see over-provisioning as a necessary evil in order to protect performance and ensure they can handle sudden spikes in demand.

This suggests that users are not enjoying the full benefits of the flexibility cloud can provide; instead, they are simply picking up their old infrastructure bad habits and moving them into the cloud. However, the introduction of containers could be the answer to these problems. Recent changes in the Linux kernel have enabled a new generation of scalable containers that could make the old virtual machine approach redundant. We have seen the likes of Docker making waves in the PaaS market with its container solution, and such companies are now starting to make waves in the infrastructure world as well.

These containers are enabling cloud infrastructure providers to offer dynamically scalable servers that are billed on actual usage, rather than on the capacity that is provisioned, helping to eliminate issues around over-provisioning. Using Linux containers, businesses no longer have to provision capacity manually. Servers scale up and down automatically, meaning that customers are billed only for exactly what they use – just as they would be for any other utility. Not only is this cost efficient, but it also takes the mind-boggling complexity out of managing infrastructure; businesses can now spin up a server and let it run with little need for ongoing management.
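
As a rough illustration of why containers feel so much lighter-weight than provisioning a VM, the sketch below starts and stops a Linux container in a few lines. It assumes Docker and its Python SDK are installed, and uses the public nginx image purely as an example; it is not tied to any particular provider or billing model.

    # Illustrative sketch only (assumes Docker and the 'docker' Python SDK are installed).
    import docker

    client = docker.from_env()

    # Start a container from the public nginx image; this typically takes seconds,
    # with no capacity to size or provision in advance.
    container = client.containers.run("nginx", detach=True, ports={"80/tcp": 8080})
    print("Running container:", container.short_id)

    # Tear it down again the moment it is no longer needed.
    container.stop()
    container.remove()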

The Intergalactic Future Is Here!

It looks like the revolutionary ‘intergalactic computer network’ that J.C.R. Licklider predicted all those years ago is finally set to become a reality. And it is funny how things come full circle, as people start to move back to a centralised model, similar to that offered in the early days of the mainframe. As our dependence on cloud in all its forms increases, the big question is: where next?

We believe that just as companies naturally gravitated towards the cloud, leaving hosting companies out in the cold, the same will happen with capacity-based versus usage-based billing; logic dictates that containers will win out in the end.


Duncan MacRae

Duncan MacRae is former editor and now a contributor to TechWeekEurope. He previously edited Computer Business Review's print/digital magazines and CBR Online, as well as Arabian Computer News in the UAE.
