
Net security: An oxymoron

SRI International scientist Peter Neumann cautions that even with new vulnerabilities and threats to the U.S.'s information infrastructure, a comprehensive response remains far away.

Paul Festa, Staff Writer, CNET News.com
Paul Festa covers browser development and Web standards.
When it comes to computer security, Peter Neumann has a clear message to both governments and mass-market software makers: Get back to work.

At a time when threats to the Internet and other computer networks loom from teenage hackers and terrorists alike, Neumann (pronounced "Noy-muhn") is sounding an alarm that computer security advocates agree has fallen on deaf ears. The trouble, Neumann warns, is that the Internet is populated by computers that were not designed with network security in mind. As a result, security is addressed on a patch-by-patch basis, but an effective solution would require redesigning systems from scratch.

Neumann, principal scientist at SRI International's Computer Science Laboratory, has tackled topics as diverse as the security of commercial aviation and electronic voting and the potential of open-source programming to produce secure software. In research papers and testimony, Neumann has made his case before the U.S. Congress, various arms of the Defense Department, NATO, the IRS, the General Accounting Office, the U.S. vice president and the National Science Foundation.

Since launching his career in 1953, programming the IBM Card-Programmed Calculator for the U.S. Naval Ordnance Lab, Neumann has garnered awards and fellowships from an alphabet soup of computer security and technology organizations, including the AAAS, ACM, EFF and IEEE. In 1997 he won the Norbert Wiener Award for excellence in promoting socially responsible use of computing technology, and this year, the 30th anniversary of his tenure with SRI, he won the nonprofit R&D Institute's annual fellowship.

Neumann spoke to CNET News.com about the state of computer security as it relates to the current terrorism crisis, among other subjects--and about his youthful encounter with Albert Einstein.

Q: How have your job and your concerns over computer security changed in light of the Sept. 11 terrorist attacks?
A: If you look back at my testimony for the House and Senate, you'll see that I have been stating for quite a few years with regard to the security of our information infrastructure that things have been getting worse each year relative to the massively increased use of the Internet and computers in everything that we do, including all of our critical infrastructure. And considering the increased vulnerability and increased threats, I have come to the conclusion that we are going backwards. Nothing I've been saying has changed, but perhaps the public awareness of the problems has changed.

My point is that we respond to specific threats without thinking about everything else, but we need to start thinking about the deeper threats, the other threats, the low-tech attacks. At the moment it is relatively easy to bring down the Internet, for example, worldwide. There are a variety of ways of doing that. The consensus of the security community is that it's relatively easy to cause significant disruption to the Internet. The standard answer before Sept. 11 was that we've never had the Pearl Harbor of computer security on the Internet, therefore why do we need to protect it? What's changed is the awareness that essentially everything is at risk.

On a practical level, how do we go about doing that? What's the main problem?
The problem is that mass-market computer systems are not inherently capable of resisting many of the types of threats that have been known to exist for the past 30 years.

Microsoft in particular has come under heavy criticism for the security of its software. How do you think it's doing?
I think they're improving, slowly, but they're starting from a situation in which desktop systems were never intended to be linked together. And networking inherently introduces all kinds of new threats.

How much of your work is directly involved in defense?
Back in 1972 I had an ARPA contract to study very reliable, fault-tolerant systems. In 1973 I had a 10-year project for the Department of Defense on designing an extremely secure system, significant aspects of which have found their way into the very high-end commercial marketplace. There's a report on my Web site for the Army Research Lab on how to design secure systems. This is not an easy problem. The challenge is to build systems with very few, if any, weak links. That is apparently a lost art. However, all the research I did back in the '50s in my doctoral dissertation was aimed at highly survivable communications systems. What makes me unhappy is that some of the best research results have not found their way into the mass marketplace.

If you had the president's ear right now, what would be the most important things you'd want to convey to him about computer security and the terrorist threat?
Adequately addressing the computer security problem is absolutely essential to the well-being of our nation and other nations as well. Focusing on the computer security issue in light of what else is going on has got to be done in the balance of all of the threats. And the key conclusion is that there's no one set of solutions that's going to solve our problems.

Nevertheless, computer security is absolutely critical. It is one of the problems we have to address. Better support is needed for serious research in developing robust computers and networks. Along with that goes education, which is also inadequate at the college and graduate-school level. We're not teaching students to have a strong system sense.

What's the status of the debate over key escrow?
(Editor's note: Key escrow systems let a third party, such as the government or a designated organization, hold a key that can be used to decrypt communications.)

The National Research Council study and various other organizations have pointed out that when you have trapdoors in systems or cryptography, those trapdoors can be misused by insiders and by outsiders. And so you inherently weaken the system rather than strengthen it by putting in trapdoors.

How would the debate have played out if it had taken place in the current wartime context?
It is taking place in the current so-called wartime context. And it's not over. This is one that's going to keep coming up. Law enforcement and intelligence have their view of what they need, and many of the privacy issues are being largely ignored.

What do you say to the law enforcement argument that the absence of a key escrow scheme hinders our ability to fight terrorism?
The simplest answer is that terrorists have not been using very advanced techniques. The problem here is that you run into an escalating spiral: Every time you raise the bar with technology, the attackers can raise the bar with anti-technology, or even technology itself.

Can you give an example?
The best cryptography in the world can be trivially subverted by some of the vulnerabilities we would have in the computer systems. You can capture the unencrypted information from inside or outside. In that sense cryptography doesn't solve the problem if it's poorly embedded in your systems. Second, high-tech solutions in themselves involve a lot of people and a lot of interpretation because a lot of the techniques are not black and white, and may generate a large number of false positives.

So you ultimately rely on people, and as we've seen in the airline business, hiring large numbers of people at minimum wage who are not properly trained is not very effective. Third, we over-endow technology. We like to believe that technology will solve our problems. Many of the problems that we're facing are in the international and geopolitical realm. They're quasi-religious, and are in some sense not amenable to technological fixes. We always look for easy answers. And there are no easy answers.

What's going on right now at SRI that promises to have the most immediate practical application to the war effort?
There's a lot of stuff in biotechnology that has potential promise: bioengineering, biocomputing. There's considerable work on security and cryptography that also has enormous promise. We're doing a lot of work on software engineering and systems architecture and security and reliability and safety of systems...There are enormous promises in computer technology, in biotech and bioengineering. But the key is the sense of the large system, of looking at the entire enterprise as a system that is inherently vulnerable, trying to identify those vulnerabilities, and doing something appropriate according to the known threat and the unanticipated threats. It's about out-of-the-box thinking--expecting the unexpected--and learning to design systems...that are able to address the full set of threats.

You mentioned that you're leading one of DARPA's Composable High-Assurance Trustworthy Systems (CHATS) programs with an emphasis on trustworthy open-source operating systems. What does CHATS consist of?
(Editor's note: DARPA is the Defense Advanced Research Projects Agency, the central research and development organization for the U.S. Department of Defense.)

CHATS comprises 12 contracts taking widely diverse approaches. The project is only four months old. It's trying to radically transform some of the assumptions people are making about computer systems, like the notion that putting all your eggs in the basket of one proprietary systems vendor is the way to go. The CHATS effort is trying to find a suitable alternative to that assumption.

How do you respond to the argument that open-source systems are by their nature insecure because their source code is exposed?
If you have a weak system, it can be compromised without knowledge of the source code. If you look at the history of security flaws, most of them are found without any knowledge of the source code. The idea of putting your head in the sand and pretending that your system is secure when it isn't has been demonstrated to be utterly ridiculous. It's what's known as "security by obscurity," and it has serious defects.

The open-source movement is not inherently guaranteed to come up with secure software unless there is significant discipline in the development, distribution, operation and administration of the resulting systems. So it's important to realize that we have a lot of weak links, all of which have to be addressed. The idea that hiding the source code is going to solve the problem is utterly ridiculous.

Couldn't it be argued that more vulnerabilities are exposed by revealing the source code?
Revealing the source code of a system that is inherently insecure does make it easier to break the system. There are analysis tools readily available on the Internet that can be applied to find those vulnerabilities. But if the developers aren't using those analysis tools to get rid of the vulnerabilities in the first place, we're in a very bad situation.

You've identified a double-edged sword: If your system is lousy to begin with, you're in trouble. If it's a good system, you should be able to distribute the source code and let everyone in the world who wants to try to break it. If they can't break it, that gives you some assurance that you've got something good.

The classical example is cryptography itself, where you publish the algorithm, the software is widely available, and the research community and the intelligence community attempt to break it. If they can't break it, that gives you some assurance that it's stronger than it would be otherwise.

What were your predictions for Y2K, and were you surprised at what happened?
I was--and still am--on the General Accounting Office's executive council, and we were deeply involved with Y2K. There were organizations like the Social Security Administration that recognized the problem 10 years ahead and radically altered their computer systems accordingly. There were many systems that were not fixed until the last few months of 1999. Had there not been the tremendous media hype, we probably would have had some major disruptions. As it was, there were a bunch of relatively minor effects, and some of them were still happening in 2001--aftereffects of patches that were put on systems that failed in January. All in all, I think the effort was relatively successful.

Tell me about the Risks Forum, and what the hot topics are these days.
In some sense, it's more of the same: We keep getting things that are problems that haven't been solved in the past and keep recurring. In other cases, it reflects a considerably heightened awareness of the risks themselves. A new class of wireless attacks was the lead item in the most recent issue. Networking and wireless pose tremendous problems. There are continual flaws in mass-market software. The typical Microsoft attitude is that if you install the hundreds of patches, the system will be secure. The fact that you have to install hundreds of patches makes you wonder about the quality of the software.

In April 2000 you addressed a NATO conference on "The Ruthless Pursuit of COTS." Can you briefly describe your findings?
COTS--commercial off the shelf. That was a meeting ostensibly trying to see how you could take off-the-shelf software and plug it into an environment that was mission critical and still have the system be robust. My conclusion was that this was the wrong solution to the wrong problem. The correct solution is to have robust solutions to begin with. So that was the beginning of anticipating the CHATS program, of asking how do we structure systems in such a way that we get what we need. My conclusions are that we need to find better ways of doing it than plugging in COTS programs that aren't robust. One of the conclusions is that the open-source movement could be a wonderful driving force on the commercial developers. That's already happening with the network routers and servers, which are already over 50 percent nonproprietary products--Linux and various BSD systems. That's a foot in the door for the open-source movement.

What's your opinion of computerized voting machines? Internet voting?
That's a real stinker. Back a decade ago, New York City was trying to upgrade from the old lever machines to fully electronic systems. And one of the problems with all the fully electronic systems is that they have absolutely no accountability worthy of the name. There's no evidence that your vote as cast is actually your vote as counted. And after spending many dollars, the city of New York decided they weren't ready to go to fully electronic systems. And they're still using lever machines, despite the difficulty in setting them up and toting them around.

The concept of the all-electronic voting machine, or Internet voting, which is even worse, is inherently flawed as long as there's no assurance that your vote as cast goes in correctly. The problem is that the vendors of systems have still not bitten the bullet in terms of providing that assurance. This is the paradigmatic computer security situation. You want privacy of your voting and the secrecy of your vote, and you want the integrity of the software and the accountability of the process. And in some sense, as the thesis of (computer scientist) Rebecca Mercuri demonstrates, privacy and accountability are inherently antagonistic. So in the aftermath of the Florida election, we see sort of a feeding frenzy of belief that technology will be able to solve the problem. This is clearly not the case. Once again, there are many people involved in the system, and many vulnerabilities.

You recently started People For Internet Responsibility (PFIR). What is it? What is it up to?
PFIR is at the moment still in its infancy. PFIR.org has a number of position papers on various issues critical to the sane and sound use of the Internet. And it's a complicated business, but we're trying to bring some sanity into the whole picture, in the way in which these problems are understood. This is about security, privacy, disenfranchisement of non-Internet people; it's transnational, recognizing the international nature of the Internet. Some of it is about Internet governance and how difficult that is.

You're on the board of EPIC (the Electronic Privacy Information Center). How do you see that group's role in the war, with respect to law enforcement and the fight against terrorism generally?
I think EPIC has a truly consistent view of preserving privacy. And they're doing a heroic effort in trying to keep these issues in the eye of the public and the eyes of Congress and the government. The problem is that the issues are not black and white, and I think EPIC is serving a tremendously useful role in pointing out the shadings of the issues.

There are no easy answers to the privacy problems, and in times of crisis, there's this difficult problem of maintaining some sense of privacy and yet allowing the government to do what it needs to do. The key here is that EPIC has been trying to promote openness in government, and this is not something that government particularly likes. Again we're back to security by obscurity, not in technology but in terms of what's going on within the government itself.

Is there anything in the Senate's recent anti-terrorism bill that concerns you?
In times of crisis, it's necessary to have discussion of critical issues. I think the process at the moment is trying to finesse all discussion. And yet there are many issues that are desperately in need of national thought. Even the definition of terrorism itself has been very poorly drawn. Somebody who is attempting to reverse-engineer an inherently insecure computer system in order to improve it in a research environment could be labeled a terrorist. The terrorism bill, in the absence of a meaningful definition of terrorism, can be used as an attack on anyone doing research on computers and security.

I'm also concerned about the UCITA (Uniform Computer Information Transactions Act) legislation, which is equally regressive: it makes reverse engineering illegal, even in attempting to remove vulnerabilities. It has passed in Maryland and Virginia, and it makes it possible for a system developer to sue you if you write a nasty review that is critical of their software. We're getting shotgun legislation that seriously overshoots the mark. There needs to be more public awareness of this.

You've written about threats to the air traffic control system and commercial aviation generally. How did the hijackings of Sept. 11 change your perspective on this problem?
If you look at my testimony to the Gore Commission Conference on Aviation Safety and Security in 1997, I pointed out that Alex Blumenstiel at the Department of Transportation has been writing reports on risks in aviation for the past 18 years. Most of those reports have apparently been ignored. So if we go back and look at everything he's written, he's basically anticipated everything that's happened and a lot more. So somebody's asleep at the wheel.

Tell me about your encounter with Albert Einstein.
That was a wonderful thing. My mother was approached by Einstein's stepdaughter Margot, and Margot asked my mother whether she would teach her how to make mosaics. And my mother was a fairly well-known mosaicist in this country. She did a magnificent portrait of Einstein, which I donated to the Boston University library, which is the site of the Einstein papers project.

In November of 1952 I was singing in a concert in Princeton and my mother said, "Please call (Einstein's secretary) Mrs. Dukas and maybe you can see him for five minutes." Mrs. Dukas said, "Oh, he adores your mother. And please come by for breakfast." So we had a wonderful two-hour discussion over breakfast about complexity in mathematics, physics and music. He discussed his notion that everything should be as simple as possible but no simpler. This was a discussion of the risks of complexity, which has been part of my research ever since.