Amazon Web Services (AWS) was hit with a service interruption on Sunday, 25 August, that caused four hours of degraded service for customers of an availability zone in its US-EAST-1 region and knocked a number of virtual machine instances offline. The degraded service was the result of the failure of a single networking device, according to the company.
The first public acknowledgment from Amazon that there was some trouble with its cloud infrastructure came at 1:22 p.m. PDT on Sunday afternoon.
“We are investigating degraded performance for some volumes in a single AZ in the US-EAST-1 Region,” an Amazon AWS status update reported.
The US-EAST-1 Region is a set of Amazon data centres located in Northern Virginia. Amazon groups its data centres into “Availability Zones” (AZs), isolated locations within each region, while the regions themselves are spread across the globe. The idea is to give customers fault tolerance within a region as well as geographically disparate stability on a global basis.
As it turns out, although Amazon did not report any trouble via its status update feeds for US-EAST-1 until 1:22 p.m. PDT on Sunday, the issue actually started approximately 30 minutes earlier. Amazon did not provide full details on the incident until 3:23 p.m. PDT, at which point an AWS status update noted, “From approximately 12:51 PM PDT to 1:42 PM PDT network packet loss caused elevated EBS-related API error rates in a single AZ.”
EBS is Amazon’s Elastic Block Store service, which provides persistent storage volumes to virtual machine instances running on the Amazon cloud. Amazon noted that a “small” number of its cloud customers had virtual machine instances that became unreachable because of the EBS errors. Among the sites impacted on Sunday afternoon were Airbnb, Instagram, Flipboard and Vine.
“The root cause was a ‘grey’ partial failure with a networking device that caused a portion of the AZ to experience packet loss,” Amazon noted in its status update.
Amazon physically removed the failed networking device in order to restore service in US-EAST-1 to normal. It was not until 6:58 p.m. PDT that Amazon’s status update gave the all-clear, indicating that normal performance had been restored.
The US-EAST-1 issue on Sunday is not the first time that Amazon has had trouble with that data centre. In 2012, storms knocked out power to Amazon’s East Coast data centres, leaving services unavailable. There was also an incident in 2011 that hit the Virginia-based East Coast AZs.
The whole concept behind the AZs, though, is to help customers mitigate the risk of an outage in any single location.
“When you launch an instance, select a region that puts your instances closer to specific customers, or meets the legal or other requirements you have,” Amazon’s AZ documentation states. “By launching your instances in separate Availability Zones, you can protect your applications from the failure of a single location.”
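For illustration, the approach Amazon describes might look like the following minimal sketch using boto3, the AWS SDK for Python; the AMI ID, instance type and zone names are placeholder assumptions rather than details from the incident, but the pattern of placing one instance in each of two Availability Zones within the same region is what the documentation recommends.

```python
import boto3

# Sketch only: launch one instance in each of two Availability Zones in the
# US-EAST-1 region, so a failure confined to one zone does not take the whole
# application offline. The AMI ID and instance type below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

for az in ("us-east-1a", "us-east-1b"):
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder AMI
        InstanceType="t3.micro",             # placeholder instance type
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": az},  # pin this instance to one AZ
    )
```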
Originally published on eWeek.