
Survival Of The Fittest: Are You Going The Way Of DevOps Or The Dodo?

The evolution of the digital service economy has created major benefits for consumers and businesses alike.

The ability to do almost anything online, from ordering your groceries to opening a bank account, provides a major advantage over the restrictions of bricks-and-mortar services. On the flip side, it has equally become a bane for enterprise IT departments. Where application developers could previously work to structured development cycles, they have long since been forced to adopt agile methods in order to speed up the development process and keep pace with customers and competitors.

Crash test customers

While this approach can provide a competitive edge by enabling smaller updates and, in turn, more frequent software releases, the shortened test cycles also mean the IT department has less time to ensure new features perform as expected before they’re launched. In essence, companies are often using customers as crash test dummies: launching services without first ironing out all the kinks, then waiting to see what happens.

The early days of Facebook are a great example of this in practice, as it relied on feedback from users rather than established testing processes, but this approach can have major ramifications for services designed to generate revenue. It’s easy to see how, before long, the operations team ends up inundated with complaints from customers and end-users about site crashes and service outages. There are also a number of broader issues adding fuel to this fire.

Globally diverse teams – Even though many large enterprises are moving away from outsourcing their application development and bringing it back in-house to centralised teams, those teams are often still spread around the world, making collaboration on design and problem resolution difficult.

Shortcuts in coding – More third-party code is being integrated into applications in an attempt to shorten development cycles, but this increases complexity and makes it harder to trace the source of performance problems.

IT department siloes – There is often very little collaboration between development and operations teams, which in most cases continue to work apart from one another, making it difficult to ensure that everyone is working towards the same goals.

War room scenarios – Divided IT teams are wasting hours in “war rooms” trying to establish whose fault the problem is and whose responsibility it is to fix it, rather than actually getting down to work and bringing the service back up quickly.

24/7 trading – The advent of e-commerce means that businesses are now essentially open 24 hours a day, every day of the year, placing enormous pressure on the IT teams responsible for ‘keeping the doors open’ round the clock.

This all adds up to a seemingly insurmountable challenge for IT department heads. How do you maintain IT service quality when your teams are under more pressure than ever to turn things around quickly? How do you prevent your business from going the way of the dodo when your core IT services teams are unable to collaborate effectively? Breaking this stalemate demands that IT teams recognise the link between the quality of IT services for end-users and business profitability. This is best achieved through the adoption of a DevOps culture, in which IT teams are encouraged to work together and share responsibility for their applications’ end-users. Of course, a total cultural evolution is easier said than done, but there are a number of steps that can ease the transition.

Outlining shared objectives

As a first step, every team within the IT department needs to be working towards a shared business goal, rather than focussing on its own piece of the puzzle. This means that IT leaders need to define success clearly, in a way that applies to both operations and development teams. Since it is an application’s functionality, and its continued performance from the end-users’ perspective, that have the biggest impact on the business, this is the obvious place to start.

Functionality and performance need to be integrated as key requirements at all stages of the application lifecycle, from development through testing to final deployment. As such, continuous testing and measurement of key metrics, such as application response time, should be embedded into application processes. Next-generation Application Performance Management (APM) solutions are central to achieving this, but the tools and the data they generate must be shared across teams to ensure they’re all of one mind.
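To make that idea concrete, here is a minimal sketch of what embedding a response-time check into an automated test suite could look like. It assumes a pytest-style test using the requests library; the endpoint URL and the 500ms budget are purely illustrative assumptions, and a real APM tool would typically capture far richer data.

```python
# Hypothetical example: a response-time check embedded in an automated test suite.
# The endpoint URL and the 500ms budget are illustrative assumptions only.
import time

import requests

RESPONSE_TIME_BUDGET_SECONDS = 0.5  # a key metric agreed between dev, test and ops
CHECKOUT_ENDPOINT = "https://test.example.com/api/checkout"  # placeholder URL


def test_checkout_meets_response_time_budget():
    started = time.monotonic()
    response = requests.get(CHECKOUT_ENDPOINT, timeout=5)
    elapsed = time.monotonic() - started

    # Functional check: the feature must work at all...
    assert response.status_code == 200

    # ...and performance check: it must also respond within the shared budget,
    # so a slow release fails in testing rather than in front of customers.
    assert elapsed <= RESPONSE_TIME_BUDGET_SECONDS, (
        f"Checkout responded in {elapsed:.3f}s, over the "
        f"{RESPONSE_TIME_BUDGET_SECONDS}s budget"
    )
```

Because a check like this fails the build rather than waiting for complaints, a slow release is caught by the team that introduced it, not by the customer.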

Everyone’s an expert

As we’ve established, a large part of the problem that IT directors face is that their teams work in siloes, with little collaboration. A major step in the adoption of a DevOps culture is to tear down these walls and get the teams sharing their expertise, data and tools in order to bring better quality services to market faster. The wealth of knowledge that testing and operations teams have on application functionality, performance, scalability and deployment in the real world can be a major asset for development teams when building new features with performance in mind.

Testing and operations teams can also benefit from developers’ insights, using them to lower the workload for each new release by creating automated tools for test and deployment environments. Working more closely in this way allows IT teams to better identify how to measure the functionality, performance and scalability of new features. It also makes it much quicker and easier to troubleshoot any issues and resolve their root cause before the application enters production, without the need for “war room” scenarios.
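As a rough illustration of the kind of automated tooling developers might hand to testing and operations colleagues, the hypothetical script below runs a basic smoke check against a freshly deployed test environment. The base URL, endpoint paths and environment variable name are assumptions made purely for the sake of the example.

```python
# Hypothetical post-deployment smoke check for a test environment.
# TEST_ENV_URL and the endpoint paths are illustrative assumptions.
import os
import sys

import requests

BASE_URL = os.environ.get("TEST_ENV_URL", "https://test.example.com")

# Endpoints every team agrees must respond before deeper testing starts.
SMOKE_ENDPOINTS = ["/health", "/api/login", "/api/catalogue"]


def main() -> int:
    failures = []
    for path in SMOKE_ENDPOINTS:
        try:
            response = requests.get(BASE_URL + path, timeout=10)
            if response.status_code != 200:
                failures.append(f"{path}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{path}: {exc}")

    if failures:
        print("Smoke test failed:\n  " + "\n  ".join(failures))
        return 1

    print("Smoke test passed: test environment looks healthy.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a deployment pipeline, a check like this means nobody has to verify the basics by hand after every release to the test environment.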

An automated future

It is essential for development, testing and operations teams to work together to find ways of moving applications from development, through testing and into production as quickly as possible, without sacrificing quality. DevOps promises to facilitate this by enabling more releases in a shorter timeframe, allowing businesses to react faster to problems or changes in the market.

However, every manual task takes time away from achieving this goal, so a high level of automation is also required. Automated performance and scalability tests, automated deployment processes and the ability to scale capacity up or down automatically can significantly reduce manual workloads. Automation also allows the key performance metrics defined by each team to be measured automatically, and communicated and shared with everybody who relies on them to make decisions.
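As a hedged sketch of how both the measurement and the sharing of those metrics might be automated, the snippet below samples an example endpoint, calculates an approximate 95th-percentile response time, and writes the result to a shared report while returning a pass/fail exit code that a pipeline could act on. The endpoint, sample size, threshold and report location are all illustrative assumptions rather than a prescription.

```python
# Hypothetical automated performance check that also shares its results.
# Endpoint, sample size, threshold and report path are illustrative assumptions.
import json
import statistics
import sys
import time

import requests

ENDPOINT = "https://test.example.com/api/search"  # placeholder URL
SAMPLES = 50
P95_BUDGET_SECONDS = 0.8          # threshold agreed across dev, test and ops
REPORT_PATH = "performance_report.json"  # shared with everyone who needs it


def measure_response_times():
    timings = []
    for _ in range(SAMPLES):
        started = time.monotonic()
        requests.get(ENDPOINT, timeout=5)
        timings.append(time.monotonic() - started)
    return timings


def main() -> int:
    timings = measure_response_times()
    p95 = statistics.quantiles(timings, n=20)[-1]  # approximate 95th percentile

    report = {
        "endpoint": ENDPOINT,
        "samples": SAMPLES,
        "p95_seconds": round(p95, 3),
        "budget_seconds": P95_BUDGET_SECONDS,
        "passed": p95 <= P95_BUDGET_SECONDS,
    }
    with open(REPORT_PATH, "w") as handle:
        json.dump(report, handle, indent=2)

    # A non-zero exit code lets a pipeline block the release automatically.
    return 0 if report["passed"] else 1


if __name__ == "__main__":
    sys.exit(main())
```

The point is less the specific numbers than the fact that the same figures are produced automatically and are visible to development, testing and operations alike.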

Of course, there is a long road ahead to full-scale adoption of a DevOps culture. However, these three basic first steps can set IT departments on the path to long-term business success as the digital era gathers pace.


Duncan MacRae

Duncan MacRae is former editor and now a contributor to TechWeekEurope. He previously edited Computer Business Review's print/digital magazines and CBR Online, as well as Arabian Computer News in the UAE.
