AWS Frankfurt experiences major breakdown that staff couldn’t fix for hours due to ‘environmental conditions’ on data centre floor

The internet giant's status page says the breakdown began at 1324 PDT (2024 UTC) on June 10, and initially caused “connectivity issues for some EC2 instances.”

Half an hour later AWS reported “increased API error rates and latencies for the EC2 APIs and connectivity issues for instances … caused by an increase in ambient temperature within a subsection of the affected Availability Zone.”

By 1436 PDT, AWS said temperatures were falling but network connectivity remained down. An hour later, the cloud colossus offered the following rather unsettling assessment:

While temperatures continue to return to normal levels, engineers are still not able to enter the affected part of the Availability Zone. We believe that the environment will be safe for re-entry within the next 30 minutes, but are working on recovery remotely at this stage.

At 1633 PDT, network services were restored, an event AWS said should lead to swift resumption of EC2 instances. A 1719 update stated “environmental conditions within the affected Availability Zone have now returned to normal level,” and advised users that “the vast majority of affected EC2 instances have now fully recovered but we’re continuing to work through some EBS volumes that continue to experience degraded performance.”
