Google is becoming a computer systems company
The search giant is building, rather than buying, increasingly large chunks of its hardware and software stack.
I've previously speculated about whether those running the mega-datacenters that deliver more and more of our applications--especially in the consumer space--might also increasingly write their own platform software and construct their own hardware. In the general case, the jury is still out, and there are some counterexamples. For instance, Yahoo has largely shifted from running a FreeBSD variant that it supported internally to the commercial Red Hat Enterprise Linux distribution.
But, however the general trend plays out, it's clear that Google is increasingly going its own way. It already extensively customizes Linux and other open-source software for its internal use. Eben Moglen of the Software Freedom Law Center, among others, has been critical of what he sees as Google's relatively stingy contributions back to open-source projects in general. And although Google doesn't design its own processors, it does source custom motherboards from Intel that it uses to build many of its own servers.
Over the past couple of weeks, we've seen two more stories--one in hardware, one in software--that further highlight Google's increasingly vertical integration.
First is the story from Andrew Schmitt of Nyquist Capital who posts that:
It is our opinion that Google (GOOG) has designed and deployed home-grown 10GbE switches as part of a secret internal initiative that was launched when it realized commercial options couldn't meet the cost and power consumption targets required for their data centers...
What is interesting about Google's approach is that it has eschewed traditional 10GBASE optical standards and instead adopted off-standard solutions that better suit its needs for time-to-market, power and port density, and cost. While Google makes use of the SFP+ cage format, it does not use the electronic dispersion compensation (EDC) function typically associated with SFP+. Instead Google is looking to employ a combination of twinax cabling for short-reach (<10m) intra-rack cabling and a motley 850nm SR-like standard. Off-the-shelf SR optical modules appear to work well up to 100m without receive equalization. Ironically, Finisar (FNSR) proposed such a solution several years ago.
If true, that's two major components of datacenter infrastructure--servers and switches--that Google will be primarily building rather than buying.
The second story, on the software side, concerns Android, Google's mobile phone software platform. As Nancy Gohring of the IDG News Service writes:
Instead of using the standards-based Java Micro Edition (JME) as an engine to run Java applications, Google wrote its own virtual machine for Android, calling it Dalvik. There are technical advantages and disadvantages to using Dalvik, developers say, but technology may not have been the driver for Google.
CNET.com's Stephen Shankland provides further background. The details are messy in both their technology implications and their politics. However, the bottom line is that, in yet another case, Google appears to be voting for bespoke in-house development of some part of the hardware and software stack.
Google is, to be sure, Google. It's unique in many respects--both in its market position and in many of its attitudes. As such, I take what Google is doing as an interesting data point but hardly rock-solid evidence of where the industry is headed.
That said, if these are truly smart moves for Google--if they bring it unique cost or function advantages, and aren't merely reflections of a mostly harmless Google corporate personality quirk--then how can companies like Microsoft and Yahoo not head down a similar path?