A look into the mind-bending Google Glass of 2029

How far-fetched is it, really, to go from today's Google Glass to nanobots communicating between your brain and a Google cloud that is indistinguishable from a human?

Google's brain in the cloud, also known as a data center. (Credit: Google)

When Google Glass made its first public appearance on April 4, 2012, it signaled the beginning of a new era of computing. Consider the precedent: In the span of half a decade, the computer moved from the desktop to the pocket, and now with Glass it is moving to the head, on its way to eventually integrating itself inside the human body.

Ray Kurzweil, Google's director of engineering, calls Glass a "solid first step" along the road to computers that rival and then exceed human intelligence. Kurzweil, who is also an accomplished inventor and futurist, predicts that by 2029 computers will match human intelligence, and nanobots inhabiting our brains will create immersive virtual reality environments from within our nervous systems:

If you want to go into virtual reality the nanobots shut down the signals coming from your real senses and replace them with the signals that your brain would be receiving if you were actually in the virtual environment. So this will provide full-immersion virtual reality incorporating all of the senses. You will have a body in these virtual-reality environments that you can control just like your real body, but it does not need to be the same body that you have in real reality. We'll be able to interact with people in any way in these virtual-reality environments. That will replace most travel, but we'll also have new travel technologies for our real bodies using nanotechnology.

As a Google director of engineering, Ray Kurzweil is working on improving computer understanding of natural language. As the author of "The Singularity Is Near: When Humans Transcend Biology," he is working to reverse-engineer the human brain. Bloomberg via Getty Images

Further down the road, people will be uploading their entire brains to computers, Kurzweil said. In the 2030s, the human brain will gain additional thinking power by expanding the neocortex into the compute cloud, accessing trillions of new concepts and experiences at speeds far beyond the biological brain. This fusion of digital and biological parts will enable a qualitative leap for humans based on a quantitative expansion of thinking, according to Kurzweil.

It's not clear whether Google's co-founders fully buy into Kurzweil's view of technology evolution or his notion of the "Singularity," a prediction that around 2045 intelligence will become predominantly nonbiological and trillions of times more powerful, erasing any distinction between humans and machines, or between so-called reality and virtual reality.

But it wouldn't be out of character for Google co-founders Larry Page and Sergey Brin to consider moon shots like direct, assistive connections between Google's servers and your brain, as they have with self-driving cars. It's mind-bending to think about the implications, but it seems possible that Google could monetize your brain instantaneously as it thinks.

Google's Sergey Brin is personally funding the development of in-vitro, lab-grown beef. Bloomberg via Getty Images/CNET/David Parry, PA Wire

Hunger pangs? Google's brain, cohabiting with your bio-brain, immediately flashes images of food, optimized for your health and eating pleasure, based on data from the sensors capturing your vital signs, data from anonymized individuals with similar profiles, your refrigerator's contents and super-targeted ad inventory.

The image that elicits the biggest autonomic response is ordered from a local eatery, or, if you are part of the DIY movement, a recipe with preparation instructions is displayed on the tiny Glass eye embedded in your retina or visual cortex. Alternatively, the meal could be prepared by a robot or even formulated on the spot from base chemistry by nanobots. Google receives payment for various contextual ads and offers that are part of the human-computer data flow across the indistinguishable virtual and real worlds.

Biologically inspired software?
Coming back to the present, Kurzweil's tenure at Google to date doesn't yet appear to include merging the human brain with the Google cloud or creating a future version of Glass the size of a blood cell that runs through your brain capillaries.

He came to Google late last year with the more modest charter of improving Google computers' understanding of natural language, which is a prerequisite for artificially intelligent computers that pass for human. It's part of Google's effort to move to "conversational search," where speech can serve as a device's primary input.

"We are developing software that is biologically inspired and uses the lessons that biological evolution learned in evolving the human brain and neocortex to create intelligent machines," Kurzweil said.

Google has a well-established research program for developing artificial intelligence. Applying design principles from neural networks, Google engineers realized significant improvements in the quality of its speech recognition. Google has also built a large data repository, Knowledge Graph, with nearly a billion objects and billions of relationships among them as a foundation for understanding the semantic content and context of queries.

"Knowledge Graph has good coverage of people, places, things, and events, but there is plenty it doesn't know about. We are at 1 percent," John Giannandrea, director of engineering for the repository, told CNET.
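The core idea behind a repository like Knowledge Graph — named objects connected by typed relationships — can be made concrete with a small sketch. This is an illustrative toy, not Google's actual data model; the entities and relation names below are invented for the example.

```python
# A minimal knowledge graph: entities ("objects") connected by typed
# relationships, stored as (subject, relation, object) triples and
# queried by pattern matching. Purely illustrative.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()  # each entry: (subject, relation, object)

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        """Return all triples matching the pattern (None acts as a wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (relation is None or t[1] == relation)
                and (obj is None or t[2] == obj)]

kg = KnowledgeGraph()
kg.add("Google Glass", "announced_by", "Google")
kg.add("Google Glass", "category", "wearable computer")
kg.add("Ray Kurzweil", "works_at", "Google")

# "What do we know about Google Glass?"
facts = kg.query(subject="Google Glass")
```

Representing facts as triples rather than documents is what lets a search engine answer questions about things instead of merely matching keywords on pages.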

Jeff Dean has been involved in many of Google's key technology projects during his 14 years at the company. Stephen Shankland/CNET

While Kurzweil and Google have moon-shot ambitions for the future of Glass, it will enter a mode of incremental improvements over the next half decade. Smartphones over the last five years have become far more capable, powerful and popular each year, following the cadence of Moore's Law, but there has been no quantum leap. Glass also faces a tougher adoption curve than smartphones, which are more essential to users than a wearable accessory.

For Glass to break through, natural language input and conversational search need to make quantum leaps. Google Fellow Jeff Dean says that voice search and image recognition will substantially improve over the next five years.

"If you're using Google Glass, it's going to be able to look around and read all the text on signs and do background lookups on additional information and serve that. That will be pretty exciting," Dean said in an interview with TechFlash.

However, Google's brain needs to have a better understanding of natural language, which is part of Kurzweil's mandate. "If we could get to the point where we understand sentences, that will really be quite powerful," Dean said. "So if two sentences mean the same thing but are written very differently, and we are able to tell that, that would be really powerful. Because then you do sort of understand the text at some level because you can paraphrase it."
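The paraphrase problem Dean describes can be made concrete with a deliberately naive baseline. Real systems use learned models; this sketch scores sentences by word overlap (Jaccard similarity), an invented stand-in that also shows why surface matching falls short — true paraphrases can share almost no words, which is exactly why deeper understanding is needed.

```python
# Toy paraphrase scorer: bag-of-words Jaccard similarity.
# High overlap suggests similar wording, but word overlap alone
# cannot detect paraphrases that use entirely different vocabulary.

def jaccard_similarity(a, b):
    """Word-overlap score between two sentences, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

s1 = "glass can read the text on signs"
s2 = "glass can read the text on street signs"  # near-paraphrase of s1
s3 = "the weather is cold today"                # unrelated sentence

sim_para = jaccard_similarity(s1, s2)  # high: mostly shared words
sim_diff = jaccard_similarity(s1, s3)  # low: almost no shared words
```

A system that only counts shared words would miss a paraphrase like "Glass can recognize street signage"; bridging that gap is what Dean means by understanding text "at some level."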

A problem for search engines today is that much of the data isn't "labeled," Dean said. It doesn't offer much data to describe itself in a way that would make it easier for a search engine to catalog. In addition, answers to more complicated queries require stitching together pieces of data from wildly disparate sources.

For example, a Web page doesn't exist to answer the question, "What's the Google engineering office with the highest average temperature?," Dean told TechFlash. "There's no Web page that has that data on it. But if you know a page that has all the Google offices on it, and you know how to find historical temperature data, you can answer that question. But making the leap to being able to manipulate that data to answer the question depends fundamentally on actually understanding what the data is."
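Dean's example can be sketched in a few lines: no single page holds the answer, but joining an office list with temperature records produces it. The office names and temperature figures below are made up for illustration.

```python
# Stitching two disparate sources to answer a question no single
# page answers: "Which Google office has the highest average
# temperature?" All data here is hypothetical.

offices = ["Mountain View", "Zurich", "Sydney"]

# Invented historical monthly averages, degrees Celsius.
temperatures = {
    "Mountain View": [10, 12, 14, 18, 21, 23],
    "Zurich": [0, 2, 6, 10, 14, 17],
    "Sydney": [23, 23, 21, 18, 15, 13],
}

def warmest_office(offices, temperatures):
    """Join the two sources, then rank offices by mean temperature."""
    averages = {city: sum(temps) / len(temps)
                for city, temps in temperatures.items()
                if city in offices}
    return max(averages, key=averages.get)

answer = warmest_office(offices, temperatures)
```

The join itself is trivial; as Dean notes, the hard part is knowing that these two sources are the right ones to combine — that is, understanding what the data means.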

Nor does Google's brain know how to book your vacation or business trip. "That's a very high-level set of instructions. And if you're a human, you'd ask me a bunch of follow-up questions, 'What hotel do you want to stay at?' 'Do you mind a layover?' - that sort of thing," Dean said. "I don't think we have a good idea of how to break it down into a set of follow-up questions to make a manageable process for a computer to solve that problem. The search team often talks about this as the 'conversational search problem.'"
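One common way to frame the "conversational search problem" Dean describes is slot filling: break the high-level request into the pieces of information it needs, then ask follow-up questions for whatever is still missing. The slots and question wordings below are invented for illustration, not any actual Google system.

```python
# Slot-filling sketch of conversational search: a trip-booking
# request is decomposed into required slots, and the system asks
# follow-up questions for the ones the user hasn't answered yet.

TRIP_SLOTS = {
    "destination": "Where are you going?",
    "dates": "What dates are you traveling?",
    "hotel": "What hotel do you want to stay at?",
    "layover_ok": "Do you mind a layover?",
}

def next_questions(filled_slots):
    """Return follow-up questions for slots the user hasn't filled."""
    return [q for slot, q in TRIP_SLOTS.items() if slot not in filled_slots]

# The user has only said where they're going.
pending = next_questions({"destination": "Tokyo"})
```

Enumerating the slots is easy once a human has written them down; the open problem Dean points to is having the computer derive the right follow-up questions on its own from an arbitrary request.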

Google isn't yet talking about bringing Glass into the augmented reality world of 3D and virtual reality. At present, it can take videos and pictures, send a tweet and provide notifications, but it will likely enter the augmented reality realm within the next five years, especially as the cost and size of processors, sensors and other components come down and their power increases.

Startups such as Meta are getting a head start on Google. Within the next two years, the company expects to ship augmented-reality glasses that combine the power of a laptop and smartphone in a pair of stylish frames that map gesture-controlled virtual objects onto the physical world, similar to the movie portrayals of app control via gestures in "Iron Man" and "Avatar."

But even Google Glass with 3D, augmented reality and vastly improved conversational search is still a primitive toy in Kurzweil's long view. "We'll make ourselves a billion times smarter by 2045," Kurzweil says.

In a 30-year span, computing has progressed from the Macintosh, which launched in 1984, to Google Glass. A moon shot traversing from today's Google Glass to nanobots communicating between your brain and a Google cloud that is indistinguishable from a human in the next 15 to 30 years is difficult to digest, but not that far-fetched.