Peter Godden, VP EMEA at Zerto, speaks to Silicon about the ways data centres can be affected by winter weather, the critical components of a disaster recovery strategy and the benefits of moving to the cloud.
Old ways of protecting data centres against volatile weather conditions are no longer adequate in today’s business world. By enabling replication to hypervisors, storage and the cloud, organisations can decide in each situation where their data should live.
Disaster recovery strategies built on a public or hybrid cloud environment allow organisations to be proactive in the face of harsh weather conditions. Even when the severity of a storm cannot be predicted accurately, organisations can still react within minutes. Freed from the infrastructure dependencies that prevent easy movement, critical applications can securely live in, and move between, multiple on-premises and cloud environments.
As more and more IT teams find themselves working in the cloud, why not use this to your advantage when faced with threatening weather?
By monitoring for these inclement weather patterns, IT teams can “get ahead of the storm” and move their data and applications before the damage hits.
This sort of proactive movement of data is impossible with a traditional data centre, of course, but for those organisations embracing a virtual, cloud-ready IT environment, it is a reality.
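To make the idea concrete, the skeleton below shows roughly what such a weather-triggered, pre-emptive move might look like. It is an illustrative sketch only: the weather feed, site names and the replication call are hypothetical placeholders, not any particular vendor’s API.

```python
# Illustrative sketch only: the weather feed and the workload-move call are
# hypothetical placeholders to show the shape of a pre-emptive DR trigger.
import time

SEVERE_ALERTS = {"blizzard", "ice storm", "hurricane"}

def fetch_weather_alerts(region: str) -> set[str]:
    """Placeholder: query a weather provider for active alerts in `region`."""
    return set()  # e.g. {"blizzard"}

def move_workloads_to_cloud(site: str, target: str) -> None:
    """Placeholder: trigger replication/failover of the site's protected workloads."""
    print(f"Initiating pre-emptive move of {site} workloads to {target}")

def watch_site(site: str, region: str, cloud_target: str, interval_s: int = 900) -> None:
    """Poll for severe-weather alerts and move workloads before the damage hits."""
    while True:
        if fetch_weather_alerts(region) & SEVERE_ALERTS:
            move_workloads_to_cloud(site, cloud_target)
            break
        time.sleep(interval_s)

if __name__ == "__main__":
    watch_site(site="london-dc1", region="Greater London", cloud_target="public-cloud-west")
```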
Organisations need to be prepared for any data centre outage, whether it results from human error, deliberate sabotage or a natural disaster. Data loss and downtime have enormous repercussions for any business, so organisations need to make sure they have a business continuity and disaster recovery (BC/DR) plan in place that will allow them to recover data from moments before an outage.
Being able to recover up-to-date mission-critical files and applications within a matter of minutes can protect organisations and act as a form of insurance against data centre outages.
Imagine, for example, being unable to process retail transactions because of a website crash or other back-office IT issues. Retailers risk losing significant revenue with every minute their critical systems are not operational. And this doesn’t just affect retailers.
All businesses need to rigorously test their BC/DR strategy and the underlying technology that supports IT resilience, so that they can be up and running within minutes when, not if, a crash or widespread outage occurs.
Automate – The best kind of DR plan is an automated one. Manual failover processes often involve many components, and each manual stage leaves room for human error. In the event of a disaster, most people will be panicking, which increases the likelihood of mistakes. An automated failover guarantees repeatability and consistency, ensuring fast, seamless and predictable recovery; a simple sketch of what such automation could look like follows these points.
Don’t just failover, failback – As well as having a well-documented failover process, it is important to create and follow a well-documented failback process. Business continuity might be the main focus during downtime, but the next step will be the demand for data to be recovered and transferred from the DR site back to the production site.
Often, organisations have not documented the failback process, or they know they cannot failback successfully. This can seriously hurt business operations and leave them more vulnerable to a variety of disruptions. The failback should not be more complicated than the original failover, and organisations should fully test production failover and failback to be completely confident both will work.
Finally, have a plan C for when your DR site goes down.
The possibility that your DR site might go down at the same time as your core site is rarely considered. Yet a wide-ranging power outage could take out both sites, regardless of the distance between them.
This is why it is critical to have a Plan A, Plan B and Plan C. To avoid situations like this, for example, you could recover to a public cloud provider, which can offer greater geographical diversity.
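For illustration only, the sketch below shows one way an automated runbook might try recovery targets in order (plans A, B and C) and finish with a documented failback. The site names and the health-check, failover and failback functions are hypothetical placeholders, not a real DR product’s API.

```python
# Illustrative runbook sketch only: health checks and failover/failback calls
# are hypothetical placeholders standing in for a real orchestration layer.
RECOVERY_TARGETS = ["secondary-dc", "public-cloud-eu-west", "public-cloud-us-east"]  # plans A, B, C

def site_is_healthy(site: str) -> bool:
    """Placeholder: probe the candidate recovery site's management plane."""
    return True

def failover(source: str, target: str) -> None:
    """Placeholder: bring protected applications up at `target`."""
    print(f"Failing over {source} -> {target}")

def failback(source: str, target: str) -> None:
    """Placeholder: replicate changes back and return production to `source`."""
    print(f"Failing back {target} -> {source}")

def run_failover(production: str = "primary-dc") -> str:
    """Try each recovery target in order; the first healthy one wins."""
    for target in RECOVERY_TARGETS:
        if site_is_healthy(target):
            failover(production, target)
            return target
    raise RuntimeError("No recovery target available -- revisit the DR plan")

if __name__ == "__main__":
    active_site = run_failover()
    # Once the primary site is restored, follow the documented failback process.
    failback("primary-dc", active_site)
```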
Over the last few years we’ve seen previous predictions around increased public cloud adoption come to fruition, and we predict 2017 will be the year hybrid cloud asserts itself as the dominant cloud environment.
Cloud spending will continue to increase, and we believe a majority of that spend will go toward hybrid cloud infrastructures; this is proving to be the sweet spot for the enterprise. Organisations that have spent a lot of time and resources on their own data centre are not likely to do away with it all overnight.
Adopting a hybrid cloud environment allows for a transition to cloud in a way that feels most comfortable: a gradual approach that can provide both immense cost savings and recovery benefits. Hybrid cloud allows for a variety of recovery options should the need arise, whether on-premises, public cloud or a combination of the two, which helps companies prepare for a wide range of disaster scenarios.
Additionally, the perceived complication and expense of transitioning to cloud, which has previously held many IT organisations back, is now starting to wither. More and more companies are realising that adopting a hybrid cloud approach, with the right partners in place, can actually be quite simple and affordable.