While people are said to be the heart and soul of a business, the modern enterprise also lives or dies on the success of its business applications. The vast majority of business processes within an organisation are underpinned by powerful business applications, whether taking and fulfilling an order or requesting holiday. As a result, any problem with these applications will typically result in some form of financial or productivity loss.

As an extra complication, such applications do not stand still. Whether patching for security, upgrading for greater capabilities, or even integrating an entirely new application, enterprises are constantly making changes to the application environment. Balancing this need to upgrade applications with the need for the modern business to be ‘always-on’ can be a difficult process for organisations doing everything possible to avoid application downtime.

Complete confidence

Patching existing applications is by far the most common change enterprises will make. Microsoft’s Patch Tuesday has become an IT tradition, and every other application a business uses will need its own regular patches to stay secure. While upgrading existing applications or deploying new ones is a rarer occurrence, a survey by the Enterprise Strategy Group found that 21 percent of organisations were planning new deployments or updates in the near future.

Regardless of the scale, an enterprise needs to be certain that any change to its application environment will both work as planned and be simple to implement. Testing is a crucial part of this. By putting the change into effect in a non-production environment first, IT teams can learn how to make the process as painless as possible, as well as ensure that the change won’t have any unexpected effects on the production environment.
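As a rough illustration of such a gate, the minimal Python sketch below applies a change to a staging copy and runs a basic smoke test before anyone touches production. The staging URL and the apply_patch.sh script are placeholder assumptions, not references to any particular product.

```python
# Minimal sketch of a pre-production test gate. The staging URL and the
# apply_patch.sh script are hypothetical placeholders for whatever patching
# and health-check mechanisms the enterprise actually uses.
import subprocess
import urllib.request

STAGING_HEALTH_URL = "http://staging.example.internal/health"  # assumed endpoint
PATCH_COMMAND = ["./apply_patch.sh", "--target", "staging"]    # placeholder script


def staging_is_healthy() -> bool:
    """Smoke test: the patched staging application answers with HTTP 200."""
    try:
        with urllib.request.urlopen(STAGING_HEALTH_URL, timeout=10) as response:
            return response.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False


def main() -> None:
    # Apply the change to the non-production environment only.
    subprocess.run(PATCH_COMMAND, check=True)
    if staging_is_healthy():
        print("Staging healthy after change; schedule the production roll-out.")
    else:
        print("Staging failed the smoke test; hold the production roll-out.")


if __name__ == "__main__":
    main()
```

In practice the smoke test would be a suite rather than a single health check, but the gating principle is the same: the production roll-out only proceeds once the change has behaved itself somewhere disposable.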

Traditionally, long periods of downtime over the weekend or in the evening meant that production IT infrastructure could be used for testing without interrupting end users. However, increased globalisation and a shift towards flexible working hours have changed this. The modern business is a 24/7, always-on operation. Applications need to be constantly available, meaning that the natural grace periods of organisation-wide downtime are rapidly becoming extinct. Instead, IT teams are facing a growing gap between the need to test and guarantee the success of a patch or other change, and the need for the business to be constantly running.

Two ends of the scale

While a new application deployment will always deserve thorough testing, for patches and upgrades the case can be less clear-cut. Generally, an enterprise will have to make a choice between two extremes.

The first is to proceed with testing in the traditional style: taking over the production environment while patches or other changes are tested. The IT team can fully work out any kinks in the system, accepting that end users will be unable to access applications during this time. While this will give confidence that the application will behave as expected, it will also produce the longest periods of downtime. As a result, the CIO will have to convince the business that the benefit gained by any change outweighs the actual and potential losses caused by missed opportunities or reduced productivity.

The second extreme is to roll out the change with minimal testing, instead fixing any issues once the application is live. While this will result in the shortest downtime for the enterprise, it barely needs saying that this can easily become a false economy. Without testing, the likelihood of encountering problems either during or after the roll-out is massively increased. The cost of a critical application not working as planned post-change can be crippling.

To avoid this, most organisations will need to walk a middle road between these extremes, depending on how drastic the change is and how business-critical the application. At one end of the scale, minor patches may need minimal testing, or even be rolled out on faith that the vendor has performed a thorough trial and that the enterprise can swiftly spot and deal with any issues. Organisations will also keep the option of rolling back applications to a pre-patch or pre-upgrade state, so that if the worst comes to the worst, they can at least return to a known, stable configuration.
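A minimal sketch of that snapshot-and-rollback pattern is below. The snapctl command and the two shell scripts are hypothetical stand-ins for whatever snapshot tooling (LVM, hypervisor snapshots, a storage array’s CLI) and patch process a given environment actually provides; only the shape of the workflow is the point.

```python
# Sketch of a patch-with-rollback workflow. "snapctl", apply_patch.sh and
# smoke_test.sh are hypothetical placeholders, not a real CLI or scripts.
import subprocess

SNAPSHOT_NAME = "app-prepatch"


def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)


def patch_with_rollback() -> None:
    # Capture a known, stable configuration before touching anything.
    run(["snapctl", "create", SNAPSHOT_NAME])
    try:
        run(["./apply_patch.sh"])   # apply the vendor patch
        run(["./smoke_test.sh"])    # verify the application still behaves
    except subprocess.CalledProcessError:
        # The worst has come to the worst: return to the pre-patch state.
        run(["snapctl", "restore", SNAPSHOT_NAME])
        raise
    # Success: the snapshot is no longer needed.
    run(["snapctl", "delete", SNAPSHOT_NAME])
```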

Where there’s a will…

None of these solutions is ideal: regardless of the precise route they take, enterprises will always be trading availability for peace of mind and finding a balance that they can live with. However, it no longer has to be this way. In less than a decade, advances in technology and technique have made it possible for organisations to thoroughly test their applications without affecting the availability of their services.

To begin with, as the costs of storage fall and servers become increasingly commoditised, it is increasingly affordable to create a separate testing infrastructure where application roll-outs can be perfected without affecting the production environment. This doesn’t even have to mean investing in a separate infrastructure, virtual or otherwise, purely for testing. For instance, the average enterprise will have a huge amount of largely idle backup infrastructure that is essentially free to exploit. With modern tools, enterprises can repurpose this as a temporary testing infrastructure, further saving on the cost of setting up a dedicated testing environment.
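As a sketch of what that repurposing might look like, the following restores the most recent production backup onto spare backup-side hardware and attaches it to an isolated network, so that tests cannot leak traffic into production. The restorectl and labctl commands, the backup label and the host name are all illustrative assumptions rather than any vendor’s actual interface.

```python
# Sketch of standing up a throwaway test environment on backup infrastructure.
# "restorectl" and "labctl", the backup label and the host name are all
# hypothetical placeholders for the backup and virtualisation tooling in use.
import subprocess

BACKUP_LABEL = "app-server-latest"   # assumed label for the newest backup
TEST_HOST = "backup-node-03"         # assumed spare capacity on the backup side
TEST_NETWORK = "isolated-test-net"   # keeps the clone off the production network


def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)


def build_test_environment() -> None:
    # Restore the production backup onto spare backup-side hardware...
    run(["restorectl", "restore", BACKUP_LABEL, "--target", TEST_HOST])
    # ...and attach it to an isolated network, so test traffic (emails,
    # orders, payments) can never reach real customers or systems.
    run(["labctl", "attach-network", TEST_HOST, TEST_NETWORK])


def tear_down_test_environment() -> None:
    # Hand the capacity back to the backup infrastructure afterwards.
    run(["restorectl", "destroy", TEST_HOST])
```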

While space should not be an issue, creating the testing infrastructure can still take up valuable time for the IT team and demand specific expertise. Yet more and more of these processes can be automated to some degree. For example, data protection techniques, such as replication and high-speed backup, are becoming far more affordable and accessible to enterprises: this means creating a replica testing infrastructure is as simple as performing a backup. As a result, enterprises can quickly set up testing infrastructure, of any scale, that is identical to the production environment as and when they need it. The speed of backup also means that rolling back these environments for repeated tests is a far simpler process.
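The sketch below shows how that rollback speed changes testing: each candidate patch is trialled against the replica, which is then reverted to its baseline so the next trial starts from the same known state. The replicactl commands and the two scripts are hypothetical placeholders for whatever replication tool is actually in use.

```python
# Sketch of repeated patch trials against a disposable replica. "replicactl",
# apply_patch.sh and smoke_test.sh are hypothetical placeholders.
import subprocess

REPLICA = "app-replica"
BASELINE = "pre-trial-baseline"


def succeeded(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0


def trial_patches(candidate_patches: list[str]) -> None:
    # Record the replica's starting state once.
    subprocess.run(["replicactl", "snapshot", REPLICA, BASELINE], check=True)
    for patch in candidate_patches:
        ok = (succeeded(["./apply_patch.sh", "--host", REPLICA, patch])
              and succeeded(["./smoke_test.sh", "--host", REPLICA]))
        print(f"{patch}: {'passed' if ok else 'FAILED'}")
        # Revert so every patch is trialled from the same known state,
        # not on top of whatever earlier trials left behind.
        subprocess.run(["replicactl", "revert", REPLICA, BASELINE], check=True)
```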

If enterprises take advantage of these capabilities, they will have confidence that any application updates, patches or deployments have been thoroughly tested beforehand, and that IT availability has not been affected. The technology to make this process work exists; what enterprises need is the will to make it happen. With this, they will find that the gap between the confidence and availability they need, and that they can provide, becomes ever smaller.

Duncan MacRae

Duncan MacRae is former editor and now a contributor to TechWeekEurope. He previously edited Computer Business Review's print/digital magazines and CBR Online, as well as Arabian Computer News in the UAE.
