Take a trip down memory lane to Google's first data center

Google's eighth employee, Urs Hölzle, shares his experiences at the company's first data center, back when it was less a data center and more a tiny closet surrounded by competitors.

An invoice from Google's first data center, a closet-sized space at the Exodus data center in Santa Clara, Calif. Urs Hölzle/Google

Before Urs Hölzle became Google's first chief engineer, he took a tour of the company's server room at the Exodus data center in Santa Clara, Calif. Not yet a Google employee, Hölzle was taken there by Google co-founder Larry Page on February 1, 1999, on possibly the shortest Google data center tour of all time.

"You couldn't really 'set foot' in the first Google cage because it was tiny," Hölzle said via Google+ on Tuesday, almost 15 years to the day since that tour. Hölzle continues to work at Google as senior vice president of technical infrastructure and as a Google Fellow.

The cage measured 7 feet by 4 feet, with 30 personal computers arranged on its shelves providing the world with more Google than it could handle.

Because of the way the data center was arranged at the time, many of the hottest companies in Silicon Valley had their servers sitting on top of one another.

"Our direct neighbor was eBay, a bit further away was a giant cage housing DEC / Altavista, and our next expansion cage was directly adjacent to Inktomi," he said. Exodus was one of the first co-location facilities in Silicon Valley.

Google's server structure was designed so that a1 through a24 built and served the main index, while c1 through c4 crawled the Internet.

Google co-founder Sergey Brin jumped into the Google+ conversation to add details.

"We skipped 'b' because 'c' stood for crawl," Brin said. "I then decided to skip 'e' because I figured it sounded too much like 'd' and would be confusing, though of course we later adopted all the other similar sounding letters anyway."

Brin's comment is No. 17. (Google+ doesn't allow for direct links to comments, only the original post.)

Brin also added details about the fly-by-night operation of Google's first off-site servers:

A quick footnote to the 'a' machines: we improvised our own external cases for the main storage drives including our own improvised ribbon cable to connect 7 drives at a time (we were very cheap!) per machine. Of course, there is a reason people don't normally use ribbon cables externally and ours got clipped on the edge while we ferried these contraptions into the cage. So late that night, desperate to get the machines up and running, Larry did a little miracle surgery to the cable with a twist tie. Incredibly it worked!

It's not entirely clear, but it sounds like by the time that Hölzle joined Google, the company had its second server cage adjacent to the first. It contained Google's first four server racks, each with 21 machines labeled d1 through d42 and f1 through f42. Brin noted in his comment that they were probably made by Kingstar, running on a single motherboard and a Pentium II CPU.

Urs Hölzle, senior vice president for technical infrastructure at Google, announces Google Compute Engine at the Google I/O conference in 2012. Stephen Shankland/CNET

The 'g' rack introduced soon thereafter became the first of Google's famous "corkboard" racks.

Matt Cutts, a longtime Google employee and head of the Webspam team, chimed in at comment No. 23 to note that Google's main ads database ran on a single machine back then, f41.

As the invoice above shows, bandwidth cost Google $1,400 per month per megabit per second, and the company had to purchase two Mbps at a time. Hölzle said Google's traffic reached that level in the summer of 1999, when one Mbps equaled around 1 million queries per day.

The deal included a complimentary number of "reasonable" reboots per month.

Google co-founders Larry Page and Sergey Brin, when they were younger. Google

The invoice also reveals that Google was able to get a discount on some of its bandwidth.

"You'll see a second line for bandwidth, that was a special deal for crawl bandwidth. Larry had convinced the salesperson that they should give it to us for 'cheap' because it's all incoming traffic, which didn't require any extra bandwidth for them because Exodus traffic was primarily outbound," Hölzle said.

The handwritten note at the bottom of the invoice, "3 20 amps in DC," is important, he said, because at the time data center space was sold by the square foot. "We always tried to get as much power with it as possible because that's what actually mattered."

Although the Exodus data center has been shut down, many of its artifacts live on in the Computer History Museum in Mountain View, Calif.

Correction 10:40 a.m. PT: This post has been updated to correct and clarify the 1998 data pricing as noted on the invoice above.
