Five full days after its largest outage hit on the morning of 21 April, Amazon Web Services said it has finally restored virtually all services to its customers.
However, many IT managers are still smouldering, not yet fully cooled off from the outage, which began at 1.41am PDT on 21 April at the AWS data centre in Northern Virginia.
Income that AWS-hosted businesses lost during that one- to five-day window will never be recouped. This was a serious business problem for hundreds, perhaps thousands, of IT managers, who are now wondering whether to continue using the service.
“EBS is now operating normally for all APIs and recovered EBS volumes,” Amazon reported on 25 April on its status dashboard. “The vast majority of affected volumes have now been recovered. We’re in the process of contacting a limited number of customers who have EBS volumes that have not yet recovered and will continue to work hard on restoring these remaining volumes.” The company said it would post a detailed incident report.
What are industry observers saying in the wake of the mishap? And what might be the short- and long-term consequences of an outage that hobbled one of the sturdiest, most trusted web services providers in the world?
Several AWS users commented with frustration on eWEEK stories covering the mishap. The blogosphere, as one might imagine, was rife with commentary.
“In short, if your systems failed in the Amazon cloud this week, it wasn’t Amazon’s fault,” blogged O’Reilly Media’s George Reese. “You either deemed an outage of this nature an acceptable risk or you failed to design for Amazon’s cloud computing model. The strength of cloud computing is that it puts control over application availability in the hands of the application developer and not in the hands of your IT staff, data centre limitations, or a managed services provider.
“The AWS outage highlighted the fact that, in the cloud, you control your SLA, not AWS.”
Morphlabs was one of the first AWS solution providers when it launched Morph Appspace in 2007 and now has more than 4,000 users.
“The Amazon EC2 outage has sent ripples and shockwaves through the AP wires and blogosphere, but for those of us who have been in the cloud computing trenches for the equivalent of tech eons (at Morphlabs, we’ve been at it for more than four years), the news is neither shocking nor a reason to stray from our mission,” founder and CEO Winston Damarillo told eWEEK.
“While it is tempting to unleash common fears about new technologies when confronted with ‘proof’ of their failings and risks, our years of innovation and adoption tell us that there is a wiser path. Approach with caution, but approach nonetheless. The same is true for the implementation of cloud computing services in your IT organisation.”
Morphlabs’ approach to software development assumes failure, and it builds fault tolerance into all of its cloud computing solutions, Damarillo said.
Ed Laczynski, vice president of cloud strategy and architecture at Datapipe, a New Jersey-based provider of managed IT and hosting services that uses AWS for one of its offerings, told eWEEK that the AWS story “shows how important it is to think about engineering when you’re designing systems for the cloud.”
“If you look at the documentation, best practices and so on of the people doing it [cloud] best, they’re all designing for failure [to happen]. For us, it was an opportunity to test that concept. Our customers that are deployed on AWS suffered only minimal disruption, if any at all, because we designed for it.”
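Datapipe’s point about designing for failure is easiest to see in code. The following is a minimal sketch only, assuming the Python boto library, with placeholder AMI and zone names: it launches duplicate instances in two availability zones so that the loss of a single zone does not take an application offline.

import boto.ec2

# Connect to the affected region (us-east-1 is Northern Virginia).
conn = boto.ec2.connect_to_region('us-east-1')

# Launch one copy of the application in each of two availability
# zones; 'ami-12345678' is a placeholder image ID for illustration.
for zone in ('us-east-1a', 'us-east-1b'):
    conn.run_instances('ami-12345678',
                       instance_type='m1.small',
                       placement=zone)  # pin this copy to the zone

An application spread this way still needs load balancing and replicated data to survive a zone failure, but zone-diverse placement is the foundation the “design for failure” advice rests on.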
Lydia Leong of Gartner Research wrote in an advisory that Amazon EC2 didn’t actually violate its service-level agreement when the outage occurred.
“Amazon’s SLA for EC2 is 99.95 percent for multi-AZ deployments,” Leong wrote. “That means that you should expect that you can have about 4.5 hours of total region downtime each year without Amazon violating its SLA.
“Note, by the way, that this outage does not actually violate their SLA. Their SLA defines unavailability as a lack of external connectivity to EC2 instances, coupled with the inability to provision working instances. In this case, EC2 was just fine by that definition. It was Elastic Block Store [EBS] and Relational Database Service [RDS] which weren’t, and neither of those services have SLAs.”
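For context, Leong’s 4.5-hour figure is simple arithmetic: a 99.95 percent availability commitment allows 0.05 percent downtime, and 0.05 percent of the 8,760 hours in a year works out to roughly 4.4 hours (8,760 × 0.0005 = 4.38).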
Finally, amid all the pain that IT managers have had to endure over the past five days, there came a bit of humour.
On the RationalSurvivability website, hosted on AWS, blogger Christofer Hoff poked a little fun at the situation by setting new lyrics to Don McLean’s folk-rock classic, “American Pie”:
“A long, long time ago …
I could launch an instance
How that AMI used to make me smile
And I knew if I needed scale
that I’d avoid that fail whale
though I knew that I was in denial
“But April 20 made me shiver
Amazon did not deliver
Bad news – oh what a mess
auto-cloning E B S …”