Amazon Web Services is experiencing problems in the East Coast region, which in turn is hurting many Web sites. A spokesperson denies claims that Anonymous caused the outage, saying no attack took place on Amazon's cloud.
Shara Tibken, former managing editor
Amazon's cloud service is experiencing an outage in the East Coast region, taking down popular sites like Reddit and Airbnb.
The outage has lasted for several hours, limiting access to popular Web sites and causing problems for small businesses and other Amazon cloud users. Along with Reddit and Airbnb, Heroku, Fast Company, Flipboard, and others have been affected.
Amazon isn't saying what happened, but a spokesperson said the problem isn't due to an attack, as some have speculated. A member of the hacker group Anonymous claimed responsibility for the outage in a tweet, but the Amazon spokesperson said that's not accurate and that no attack occurred.
The Amazon Web Services status dashboard has shown performance issues in its cloud located in Northern Virginia since about 10 a.m. PT. That in turn is affecting many sites hosted in Amazon's US-East region, with many startups notifying users via Twitter.
Amazon's dashboard has said the East Coast region is experiencing degraded performance for its Elastic Compute Cloud, which means some customers' virtual servers are running slowly or are unreachable. Customers are also facing connectivity issues with Amazon's Relational Database Service in Northern Virginia and "elevated API failures and delays" in its Elastic Beanstalk service, also in Virginia.
The most recent Relational Database Service update, at 3:51 p.m. PT, says Amazon is "making steady progress" in recovering connectivity for the affected areas. As for the cloud's degraded performance, Amazon said at 3:48 p.m. PT that it's continuing to work to restore the remaining volumes.
"We have been able to increase the rate of recovery in the last thirty minutes and hope to have the majority of the remaining volumes recovered shortly," the company said.
Things aren't looking quite as good for the API failures and delays, though, with Amazon noting at 2:05 p.m. PT that it continues to see delays with launching, updating, and deleting environments.
The outage isn't the first Amazon has experienced. In June, one such outage impacted Netflix, Pinterest, and Instagram, among others.
Here's the full rundown of Amazon's updates for its Elastic Compute Cloud. Check the dashboard for other updates.
10:38 AM PDT We are currently investigating degraded performance for a small number of EBS volumes in a single Availability Zone in the US-EAST-1 Region.
11:11 AM PDT We can confirm degraded performance for a small number of EBS volumes in a single Availability Zone in the US-EAST-1 Region. Instances using affected EBS volumes will also experience degraded performance.
11:26 AM PDT We are currently experiencing degraded performance for EBS volumes in a single Availability Zone in the US-EAST-1 Region. New launches for EBS backed instances are failing and instances using affected EBS volumes will experience degraded performance.
12:32 PM PDT We are working on recovering the impacted EBS volumes in a single Availability Zone in the US-EAST-1 Region.
1:02 PM PDT We continue to work to resolve the issue affecting EBS volumes in a single availability zone in the US-EAST-1 region. The AWS Management Console for EC2 indicates which availability zone is impaired.
EC2 instances and EBS volumes outside of this availability zone are operating normally. Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery. Customers receiving this error can retry failed requests.
2:20 PM PDT We've now restored performance for about half of the volumes that experienced issues. Instances that were attached to these recovered volumes are recovering. We're continuing to work on restoring availability and performance for the volumes that are still degraded.
We also want to add some detail around what customers using ELB may have experienced. Customers with ELBs running in only the affected Availability Zone may be experiencing elevated error rates and customers may not be able to create new ELBs in the affected Availability Zone. For customers with multi-AZ ELBs, traffic was shifted away from the affected Availability Zone early in this event and they should not be seeing impact at this time.
3:48 PM PDT We are continuing to work to restore the remaining affected EBS volumes and the instances that are attached to them. We have been able to increase the rate of recovery in the last thirty minutes and hope to have the majority of the remaining volumes recovered shortly.
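The 1:02 p.m. update tells customers hitting ResourceLimitExceeded errors to simply retry the failed request. In practice, that's the classic retry-with-exponential-backoff pattern. Here's a minimal sketch of that idea; the error class and the launch call are stand-ins, since the real AWS client library and its exception types aren't shown in the updates:

```python
import random
import time


class ResourceLimitExceeded(Exception):
    """Stand-in for the throttling error AWS issues to manage load during recovery."""


def call_with_retries(api_call, max_attempts=5, base_delay=0.1):
    """Retry a throttled API call, backing off exponentially between attempts."""
    for attempt in range(max_attempts):
        try:
            return api_call()
        except ResourceLimitExceeded:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff with jitter, so retries don't arrive in sync.
            time.sleep(base_delay * (2 ** attempt) * random.random())


# Simulated launch call that is throttled twice before succeeding,
# mimicking the recovery window Amazon describes.
attempts = {"n": 0}


def launch_instance():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ResourceLimitExceeded("Request limit exceeded")
    return "i-1234abcd"  # hypothetical instance ID


print(call_with_retries(launch_instance))
```

The jitter matters here: when thousands of customers retry on a fixed schedule during an outage, their requests arrive in waves and prolong the overload, which is exactly why Amazon throttles and asks for retries rather than failing hard.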
Updated at 4:25 p.m. PT with updated outage details and a comment from Amazon.