Is a 'global superorganism' in our future?

Not many would argue against the proposition that the sum of the world's connected computational devices represents a new chapter. Figuring out what's written there is the harder part.

Charles Cooper Former Executive Editor / News
Charles Cooper was an executive editor at CNET News. He has covered technology and business for more than 25 years, working at CBSNews.com, the Associated Press, Computer & Software News, Computer Shopper, PC Week, and ZDNet.

I'm catching up after a week's vacation in places that, I'm happy to report, still don't speak Internet. So pardon me for being late to comment, but Kevin Kelly's latest piece, "Evidence of a Global SuperOrganism," is a must-read.

Kelly's post is nuanced and complex and I hesitate to reduce his thesis to a simple (and simplistic) summary. Suffice it to say, though, he posits the ultimate emergence of a global digital superorganism. His point of departure is the uncontroversial assumption that the sum of the world's connected computational devices creates what essentially is a "superorganism of computation with its own emergent behaviors."

"I define the One Machine as the emerging superorganism of computers. It is a megasupercomputer composed of billions of subcomputers. The subcomputers can compute individually on their own, and from most perspectives these units are distinct complete pieces of gear. But there is an emerging smartness in their collective that is smarter than any individual computer. We could say learning (or smartness) occurs at the level of the superorganism.

But this transformation remains a work in progress. Kelly suggests that the One Machine will pass through four developmental levels, en route from its beginnings as a "plain superorganism" into something approaching consciousness. These phases include:

•  I. A manufactured superorganism

•  II. An autonomous superorganism

•  III. An autonomous smart superorganism

•  IV. An autonomous conscious superorganism

In one respect, his argument reminded me of Ray Kurzweil's writings on how machine intelligence, represented by the totality of information-based technologies, will eventually surpass human intelligence, the idea being a merger of our biological existence with technology. Here is how Kurzweil puts it in The Singularity Is Near:

"It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed...Our version 1.0 biological bodies are likewise frail and subject to a myriad of failure modes, not to mention the cumbersome maintenance rituals they require. While human intelligence is sometimes capable of soaring in its creativity and expressiveness, much human thought is derivative, petty, and circumscribed. The Singularity will allow us to transcend these limitations of our biological bodies and brains."

"We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want (a subtly different statement from saying we will live forever). We will fully understand human thinking and will vastly extend and expand its reach. By the end of this century, the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human intelligence."

There's obviously no small amount of disagreement about the likely direction this will take. Letting your imagination go entirely, one might even construct a science-fiction outcome in which the machines take control and snuff out the human race. The apocalyptic finish makes for the flashier headline, but I thought Nova Spivack had as good an idea as any I've seen about where this is heading.

"Because humans are the actual witnesses and knowers of what the OM does and thinks, the function of the OM will very likely be to serve and amplify humans, rather than to replace them. It will be a system that is comprised of humans and machines working together, for human benefit, not for machine benefit. This is a very different future outlook than that of people who predict a kind of "Terminator-esque" future in which machines get smart enough to exterminate the human race. It won't happen that way. Machines will very likely not get that smart for a long time, if ever, because they are not going to be conscious. I think we should be much more afraid of humans exterminating humanity than of machines doing it."

Your take?