Amazon's S3 cloud storage service hits 1 trillion files
Can cloud storage scale? You bet. Amazon's Jeff Barr notes this morning that the company's S3 online storage service has blown past one trillion objects.
Amazon's Jeff Barr noted this morning that the company's S3 online storage service reached 1 trillion objects, or files, last week, an impressive milestone for a service that launched in 2006.
Barr writes on the company's Amazon Web Services blog:
That's 142 objects for every person on Planet Earth or 3.3 objects for every star in our galaxy. If you could count one object per second it would take you 31,710 years to count them all.
He added that the object count has been growing by as much as 3.5 billion objects in a single day, or roughly 40,000 new objects per second. In the service's first year, that much growth took many months; Amazon's most recent milestone announcement, 905 billion objects, came in April.
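Barr's figures are easy to verify with back-of-the-envelope arithmetic. A quick sketch in Python, assuming a 2012 world population of about 7 billion and roughly 300 billion stars in the Milky Way (neither assumption appears in the article):

```python
# Sanity-check the figures cited above.
objects = 1_000_000_000_000          # one trillion S3 objects
population = 7_000_000_000           # assumed ~7 billion people (2012)
stars = 300_000_000_000              # assumed ~300 billion Milky Way stars
daily_growth = 3_500_000_000         # peak objects added in a single day

per_person = int(objects / population)             # 142 objects per person
per_star = round(objects / stars, 1)               # 3.3 objects per star
years_to_count = round(objects / (60 * 60 * 24 * 365))  # 31,710 years at 1/sec
per_second = round(daily_growth / (60 * 60 * 24))  # ~40,509 new objects/sec

print(per_person, per_star, years_to_count, per_second)
```

The counting-time figure uses a 365-day year, which reproduces Barr's 31,710 exactly; the per-second rate works out slightly above the article's rounded "40,000."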
All this, despite an object expiration feature that has removed some 125 billion objects since it was launched late last year. "In other words, even though we've made it easier to delete objects, the overall object count has continued to grow at a very rapid clip," Barr writes.
Amazon's mission for S3 (Simple Storage Service) is to make scalable, secure, and fast computing easier for developers by allowing them to store and retrieve any amount of data, at any time, from anywhere on the Web. (Who would need such a thing? Web application providers, media content providers, data analysis fiends, and those who need to back up and archive data for disaster recovery.)
By flexing the muscle of its extensive cloud infrastructure, Amazon seeks to pass along the benefits of economies of scale to S3 customers. Each new milestone underscores its point and drives down the price per terabyte for the customer. "Adding nodes to the system increases, not decreases, its availability, speed, throughput, capacity, and robustness," the company says.
This story was first published as "Amazon S3 hits one trillion objects" at ZDNet's Between the Lines.