
Q&A: Kurzweil on tech as a double-edged sword

Ray Kurzweil, inventor and futurist, discusses the Turing Test, human vs. machine intelligence, and the dangers of advanced technologies.

Natasha Lomas Mobile Phones Editor, CNET UK
Natasha Lomas is the Mobile Phones Editor for CNET UK, where she writes reviews, news and features. Previously she was Senior Reporter at Silicon.com, covering mobile technology in the business sphere. She's been covering tech online since 2005.

Ray Kurzweil has invented and commercialized a raft of innovative technologies--including a text-to-speech synthesizer, voice recognition software, and a print-to-speech reading machine for the blind--garnering a clutch of awards in the process. He has also written extensively on artificial intelligence and robotics.

In several of his published books, including The Age of Spiritual Machines and The Singularity Is Near, he describes a vision of the future where machine and human intelligence are increasingly combined, augmenting each other and ultimately, in Kurzweil's view, enabling humans to become both smarter and better. "These technologies can enhance not just our intelligence but our ethical and moral sense, our emotional intelligence, and make us more exemplary of what we consider to be human," he says.

Key to understanding Kurzweil's philosophy is what he dubs "the law of accelerating returns"--the belief that technological change has an exponential, not linear, progression. Information technologies that today seem to be inching forward at a snail's pace will thus reach a tipping point much faster than expected and accelerate ever more rapidly thereafter, enabling disruptive change in the relatively near term.

"The computer in your cell phone today is a million times cheaper and a thousand times more powerful and about a hundred thousand times smaller (than the one computer at MIT in 1965) and so that's a billion-fold increase in capability per dollar or per euro that we've actually seen in the last 40 years," says Kurzweil.

"The rate is actually speeding up a little bit, so we will see another billion-fold increase in the next 25 years--and another hundred-thousand-fold shrinking. So what used to fit in a building now fits in your pocket, what fits in your pocket now will fit inside a blood cell in 25 years."
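Kurzweil's figures are easy to check: a million-fold drop in cost multiplied by a thousand-fold rise in power is exactly the billion-fold gain in capability per dollar he cites. A minimal sketch (the variable names are illustrative, not from the interview) confirming the arithmetic and the doubling time it implies:

```python
import math

# Kurzweil's two figures for the change since 1965
cost_reduction = 1_000_000      # "a million times cheaper"
power_increase = 1_000          # "a thousand times more powerful"

# Combined gain in capability per dollar
capability_per_dollar = cost_reduction * power_increase
assert capability_per_dollar == 1_000_000_000  # the billion-fold increase

# A billion-fold gain over 40 years implies a steady doubling time of
# 40 / log2(1e9) years -- roughly 1.3 years, i.e. about every 16 months.
doubling_time_years = 40 / math.log2(capability_per_dollar)
print(round(doubling_time_years, 2))  # → 1.34
```

That steady sixteen-month doubling, rather than any linear trend, is the pattern the "law of accelerating returns" describes.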

Silicon.com reporter Natasha Lomas recently caught up with Kurzweil--who finished 14th in this year's Silicon.com Agenda Setters list--to discuss his vision of a man-plus-machine future, what intelligent computers will mean for human society and jobs, and what dangers we might encounter in a world awash with advanced technology.

Q: What is the most exciting technology that you've seen in recent years?
Kurzweil: One industry that has, just in the last few years, transformed from a pre-information era into an information technology is health and medicine. We have software running in our bodies that's thousands of years old or more, and it evolved when conditions were very different. For example, the fat insulin receptor gene says "hold onto every single calorie in your fat cells," and that was a good idea 1,000 years ago. It's not a good idea today--it underlies an epidemic of obesity, certainly in my country. And what would happen if we turned that gene off?

There are other genes that are necessary for heart disease or cancer to progress that we'd like to turn off and we've come up with a new technology, RNA interference, that can turn off selected genes. We also have new methods of adding new genes so...we can update this outdated software that runs in our bodies. We can also turn on and off enzymes and proteins and really reprogram the information processes of underlying biology--and we can design these interventions on computers rather than just try to find some substance that happens to work and we can then test them out in biological simulators.

Now all of these developments...are in an early stage but they're information technologies so they will advance exponentially not linearly. These technologies will be a thousand times more capable in 10 years, a million times more powerful in 20 years and, according to my models, we'll be adding more than a year every year not just to infant life expectancy but to your remaining life expectancy, so the sands of time will start running in rather than running out.

When will the Turing Test be passed? And what will it mean for human society?
Kurzweil: I've been quite consistent that it'll happen by 2029. I think (the rules, under which a computer passes the test if it fools the judges 30 percent of the time, are) actually too lenient--in the most recent test the computer fooled the judges 25 percent of the time. Every time they run that test the computers do a little bit better. When the first reports (of a computer passing) come in, I probably won't accept it myself...but then as time goes on the computers will pass more and more stringent sets of rules, and by 2029 it'll be unarguable that computers have passed. And I do think it's a good test. It's not, by the way, a test of human consciousness--it's a test of human intelligence, which is something we can objectively measure, even though we can argue about how to measure it.

Consciousness is not something we can readily measure in another entity. However, in order for a computer or any entity to pass the Turing Test it has to master human emotion--and human emotion is not some sideshow. What humans do well is both pattern recognition and our emotional thinking, which is a form of recognizing patterns that we find in situations. Getting the joke, being funny, expressing a loving sentiment--these are actually the most complicated things we do, the cutting edge of human intelligence.

In terms of the impact on society, it will be an important threshold but it won't transform things right away...because having a few more equivalents of human intelligence isn't necessarily going to change things. But because non-biological intelligence will be subject to the law of accelerating returns, it will continue to progress in both hardware and software: these intelligent entities can access their own source code, so they can upgrade themselves. Ultimately non-biological intelligence will be much more powerful than biological human intelligence, but it's not an invasion of intelligent machines from Mars--it's coming from our own civilization. And we will use it as we do today, to expand our own reach--we will make ourselves smarter. That is what is unique about human beings. We were the first species to create tools to extend our reach, and then we use our tools to create more powerful tools; no other species does that.

Will super intelligent machines ever have souls?
Kurzweil: The soul is a synonym for consciousness...and if we were to consider where consciousness comes from, we would have to consider it an emergent property. Brain science is instructive there: as we look inside the brain--and we've now looked at it in exquisite detail--you don't see anything that can be identified as a soul. There are just a lot of neurons, and they're complicated, but there's no consciousness to be seen. Therefore it's an emergent property of a very complex system that can reflect on itself. And if you were to create a system that had similar properties and a similar level of complexity, it would have the same emergent property, and this would be more than an abstraction, because these future entities...will be convincing.

It also won't be clear--you won't be able to walk into a room and say, "OK, humans on the left, machines on the right," because it's going to be all mixed up. You'll have biological humans, but they'll have machine processes in their brains; there may be a lot more complexity in the machine intelligence in their brains than in the biological portion. It's not going to be a clear distinction of where human or biological intelligence stops and machine intelligence starts. (So) we will attribute consciousness to entities even if they have no biology, even if they're fully machine entities: they will seem human, they will seem conscious, we will attribute souls to them--but that's not a scientific statement.

In seeking to create artificial intelligence, why are we attempting to mimic the human brain when machine intelligence necessarily seems to be a very different type of intelligence?
Kurzweil: There are two different approaches to AI, and both of them are showing themselves to be successful. One is just to engineer intelligent solutions without consideration of how the brain does it, which is the way we created flying machines without necessarily emulating birds. And a lot of AI--in fact most of it in use today--was done that way. That's because we really couldn't see inside the brain until quite recently--that's another exponential progression. We now have brain scanners that can actually see inside a living brain at the level of individual synapses and interneural connections and can see the neurotransmitters and...see new spines being created as we think our thoughts--so we can see not only our brain create our thoughts but our thoughts create our brain.

We are able now to actually turn this data into working simulations of brain regions--there are two dozen brain regions that have been modeled and simulated...and as we simulate these regions we are learning how the brain produces this intelligence and there's a lot to be learned there. The best example of human intelligence we have is the human brain and as we learn its methods we can add that to our toolkit. It doesn't mean we're going to just copy exactly how a human brain works. We're going to basically apply those principles. That's what engineering does well. As engineering learns scientific principles it can magnify and focus on those principles and dramatically increase their effects.

Is too much technology--and the sheer volume of accessible information--ruining our ability to concentrate?
Kurzweil: Not at all. This old controversy goes back to kids using calculators and not learning arithmetic. But if you don't have to bother with the mechanics of arithmetic, you need to think more about the abstractions of how to solve a problem. And the fact that we can access knowledge and automate some of the more mechanical aspects of thinking allows us to think more creatively, and creative projects are getting done more rapidly, so we are increasing human creativity with these tools. There's also the phenomenon of the wisdom of crowds, which the Internet is able to harness. The blogosphere, for example--an individual blog may be unreliable, but the whole blogosphere is able to uncover the truth about issues much more rapidly...so a crowd can be much wiser than any of the individuals. It's kind of the opposite end of the spectrum from the lynch mob, where you have the lowest common denominator of intelligence. But decentralization tends to harness the wisdom of crowds rather than the wisdom of lynch mobs.

Are there any jobs computers/robots/AI could not eventually do better than humans?
Kurzweil: Ultimately artificial intelligence is going to be able to do everything humans do. (It) will operate at the best human levels and do so tirelessly, but...there are in fact more jobs today than there were 100 years ago, and they pay eight times as much in constant currency as a century ago, and they're more complex and actually more satisfying--and we've also invested a lot more in education as a result. So these trends are going to continue; work is going to become more and more intellectual. I'd say that already half the population contributes to creating information or intellectual content of one kind or another--none of these jobs existed 50 years ago.

What downsides are there to advanced technologies?
Kurzweil: Technology is a double-edged sword, and the Internet will spread hate and allow destructive groups to organize...but I think the destructive side of the Internet is fairly subtle. An issue I'm more concerned about...is the abuse of biotechnology. I think it's going to be very powerful in terms of enabling us to overcome disease and aging and extend human longevity and health, but it could also be used by a bioterrorist today to reprogram a biological virus to be more deadly, more communicable, or more stealthy, and so some people have called for a relinquishing of (biotech and other advanced technologies like nanotechnology and AI) because they are too dangerous.

In my view, relinquishing these technologies is a bad idea for three reasons. First, it would deprive us of these profound benefits, and there's still a lot of suffering in the world that we need to overcome. Second, it would require a totalitarian government to implement a ban. And third, it wouldn't work--and I think that's really the key point: we'd just drive these technologies underground, where they would be even more dangerous and more out of control. So my view is that the correct response is twofold: first, ethical standards to prevent accidental problems by responsible practitioners...and second, developing a rapid-response system that can deal with people who don't follow the guidelines, who are trying to be destructive, like terrorists. The good news is we now have the tools to do that. We can now sequence a biological virus in one day.

Natasha Lomas of Silicon.com reported from London.