Visions of doomsday: One year later

Bill Joy revisits his now famous clarion call, warning about the danger posed by unchecked technologies.

8 min read
Some critics dismissed him as a high-tech Cassandra. But Bill Joy isn't holding any grudges.

After all, a widely discussed essay he published in the pages of Wired magazine last April got the attention he hoped it would, sparking a charged--and continuing--debate about the challenges posed to mankind by new technologies.

And now with the benefit of hindsight, Joy, the chief scientist at Sun Microsystems, sees no reason to take back what he wrote. If anything, after a year's worth of attacks by self-replicating computer viruses--not to mention sundry events including new outbreaks of "mad cow disease" and ethical angst triggered by research into genetic engineering--Joy's words perhaps carry more punch than ever.

Joy recently sat down with CNET Radio Executive Editor Steve Kovsky to take stock of the industry's reaction to his warning and what's being done to develop a framework for the management and self-policing of new--and potentially deadly--technologies.

Q: Bill, it has been almost exactly one year since the publication of your (Wired magazine) essay, which really was intended to open a lot of people's eyes. If you could recap what your fears are, and what prompted you to write this essay?
A: I became concerned about the fact that a number of sciences were becoming information sciences. As an information technologist, understanding the difficulties of controlling information, I realized that both the great potential--and also the great potential for harm--of these sciences would be very widely available. I felt it necessary to look at the risks inherent in having things be information sciences rather than traditional laboratory sciences.

You make a lot of very interesting points. But for people who didn't necessarily read your arguments, it was fairly easy to dismiss them. And for some to even say, "Bill Joy--he's a brilliant man, but he's gone off the deep end a little bit." Did you get that reaction from anybody personally?
Mostly from people who hadn't read the essay. I think that...in some sense I'm not sure what they're responding to at that point. But no, a large number of people have said, "Yes, these concerns are valid." The disagreement is mostly about what we should do about it. I don't find most people in denial of the danger because, in some sense, to admit the power of these new sciences and the related technologies is to admit that there's danger.

Drill down on a couple of these technologies. The ones that are of concern are robotics, genetics and nanotechnology. Could you discuss each one of them and at what point they cease being a boon to mankind and take on more of the character of a possible threat to mankind?
Well, I think the thing that they have in common is that designs in these technologies are basically information. So, for example, more and more we're seeing biology become an information science, and we're seeing material science, which is what nanotechnology is becoming--now an information science.

So today what we largely have is instrumentation in the world, like in the cyclotron or in the lab, where they're cutting apart the DNA. Then we have computers to analyze it. But with a little more understanding, you can do a simulation completely within the computer. At that point, you're doing what historically was laboratory science without the laboratory. And given access to sufficiently powerful computers, anybody can have a laboratory in their laptop.

Now, that is great because people can discover all sorts of wonderful things. But that also means that the potential for mischief or accident is no longer limited to those people who have a fairly large and expensive laboratory, but can occur within a computer itself.

So this is unprecedented, really. I mean, people haven't been able to create things that were potentially destructive inside of a computer like this before. And the things that are most dangerous are the things that, when let loose, replicate themselves in the environment.

So it's a combination of the fact that these new sciences are information sciences, and the fact that the things you can make with these new sciences are potentially self-replicating, like machines that make more of themselves. The danger we have with something like that is man-made, and I think that danger is quite real.

Now, this danger could be the result of abuse or an accident?
If enough people are playing with it, there will be accidents. We can assume, say, people are mostly responsible. But what we can't assume is that there aren't crazy people. And I think this is the difficulty...If we put a sort of arbitrary amount of power in everybody's hands, then it'll fall into somebody's hands who is malicious. And it might even be the case that accidents are more likely, but I think the thing that ought to give us pause is, in fact, not just accidents, because we could use statistics on that. The thing that's very difficult to stop is the enabling of crazies or terrorists or something. And I think it's a real danger and one that the government has been particularly concerned about.

It's hard for many of us to imagine an individual who would have it within themselves to use this as a weapon against another human being. But in a sense, you kind of went eyeball-to-eyeball with the Unabomber, and you had reason to believe that you might have been in his sights at one time.
Yeah. One theory is that there were people who were written about in The New York Times and became targeted by him. And I certainly was written about in the Times for other work that I had done.

The thing about this technology is, it's just bits in a computer, and you can manipulate it or buy it without the need for a large laboratory--in a way that's essentially untraceable, because there are no large pieces of equipment that are essential and hard to get. Then someone like a Timothy McVeigh or a Ted Kaczynski would have access to these potentially self-replicating destructive things. It could be something that's dangerous to humans that's as contagious as hoof-and-mouth, for example--something that's been engineered in a lab so that there's no natural resistance.

This is not something that's a pleasant thing to talk about, but to deny that it's possible, I think, is foolish. I think the question, really, is if we accept that the technology is enabling these kinds of things as well, then we have to make a judgment as to what should we do that's sensible as a consequence of that possibility. We shouldn't just close our eyes and do the three monkeys thing with our hands.

The question becomes, How do we stop them? This is where one would have hoped that at the end of your essay, there was a silver bullet. It doesn't appear that there is.
No, it's in the nature of these technologies that they're more powerful on the offense than on the defense. I think it's much easier to build a new form of disease using genetic engineering, ultimately, than to build an immune system that will defend against it. So, for example, it's much easier to build a nuclear weapon than a nuclear weapon defense. And certainly, it's much easier than deploying a defense.

So this is not good news, and I wish I had a silver bullet. Recognizing that there is this problem, what we need is for scientific institutions--say, international organizations of biologists--to take responsibility in the biological sciences. I know nanotechnologists have been thinking about these things and coming up with some proposed solutions.

One danger we fall into is thinking that it's like a Pandora's box thing where it's either in or out of the box. And that's not correct. There's always risk in life--there's risk that we will be hit by an asteroid or something. But every little bit of risk is cumulative. And so what we need to do is to do some big things that make sense, but also realize that small things that reduce the danger of abuse of these technologies make sense also. So it's a kind of a thing that won't be finished with one single action.

But the action that needs to be taken in many of these cases is simply to stop, simply to arrest that research, that progress--and move away.
I don't know...How easy would we want to make it for people to build nuclear devices? Is it to our advantage to make them smaller and smaller and easier and easier to manufacture? I mean, at some point, it doesn't make any sense. There are probably certain kinds of information that we wouldn't want everyone to have. Do you want everyone to know how to make smallpox? You can just imagine how deadly a smallpox outbreak would be. How widespread would we want the smallpox virus to be? And if we don't want the virus to be widespread, then we certainly don't want some information that's equivalent to the virus to be widespread.

Now, if we can't control it, then we've got a problem. But in either case we ought to say, "Yeah, this is a problem," and do the best we can at managing the situation as opposed to denying it.

Are you confident that we can manage the problem? Or are the people into whose hands we most fear this falling also the people least likely to take a Hippocratic oath for scientists?
Yeah, exactly. Well, you know, we do have some real problems. I don't think we know what to do about people using genetic engineering against our crops. They have biological tests. No, I don't think we can get the risk to be zero. But I think understanding the nature of the information age is to understand that as these sciences become information sciences, the risk goes higher; and that we may have to manage information in a way that we have only managed materials before--certain dangerous materials.

Now, people will say we can't do that. But if we don't do it, we've got a big problem. And so I don't think we should give up so easily on trying to institute some sensible management and self-policing. I think a scientific organization that has a good code of conduct can police its own behavior to a large extent--certainly to a large enough extent--to reduce the risk.