
Flickr outage reveals site's scale

Stephen Shankland, Former Principal Writer

Doubtless many Flickr customers were not happy with the problems Monday in which the Yahoo-owned site displayed the wrong images. Perhaps more interesting than the explanation of what went wrong, however, was the revelation of the volume of traffic the photo-hosting site handles.

"Flickr serves hundreds of millions of photos each day. On the highest traffic days, just over a billion photos are served," said Flickr developer Eric Costello in a blog entry about the problem.

That's a lot of photos: a billion a day works out to more than 11,574 per second, averaged around the clock.

Costello also took pains to apologize for the problems and assure Flickr users that no files were corrupted and that the site wasn't hacked. Instead, the problem lay with flawed information served up by caching servers, an army of helper machines that accelerate distribution of data housed on primary servers.
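
Costello's post doesn't include code, but the failure mode he describes is easy to picture with a toy example. The sketch below is purely illustrative; the PhotoCache class, its names, and the "corrupted" flag are assumptions for demonstration, not Flickr's actual software. It shows a cache that normally answers requests from memory, that when it goes "berserk" hands back whichever cached image happens to be lying around, and that returns to normal once restarted, because the primary copies were never altered.

    import random

    class PhotoCache:
        """Illustrative in-memory cache in front of slower primary storage.
        Hypothetical names and structure, not Flickr's implementation."""

        def __init__(self, origin):
            self.origin = origin      # primary servers holding the real files
            self.store = {}           # photo_id -> image bytes
            self.corrupted = False    # stand-in for the "berserk" failure mode

        def get(self, photo_id):
            if self.corrupted and self.store:
                # The failure described in the post: some random cached image
                # is returned instead of the one that was requested.
                return random.choice(list(self.store.values()))
            if photo_id not in self.store:
                # Cache miss: fetch from the primary servers and keep a copy.
                self.store[photo_id] = self.origin[photo_id]
            return self.store[photo_id]

        def restart(self):
            # Restarting the malfunctioning server clears its bad state;
            # the originals were never touched, so no files are lost.
            self.store.clear()
            self.corrupted = False

    origin = {"photo-1": b"cat.jpg bytes", "photo-2": b"dog.jpg bytes"}
    cache = PhotoCache(origin)
    print(cache.get("photo-1"))   # correct image, now cached
    cache.corrupted = True
    print(cache.get("photo-2"))   # may be the wrong image entirely
    cache.restart()
    print(cache.get("photo-2"))   # correct again after the restart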

"Tonight's problem was a result a few of the photocaches going berserk and instead of returning the correct image file when a particular photo was being requested, it just returning some random image that happened to be in the cache," Costello said. "To be clear, we regard this as a serious problem, but it is something that goes away as soon as we restart the malfunctioning servers (Tonight we found that the servers were going insane again shortly after restarting, but we have isolated the problem and believe we have a permanent fix)," Costello said.