
Microsoft's blind spot

Sun Microsystems Chief Scientist Bill Joy says an ambitious security push outlined by Bill Gates last month is likely to fail because of fundamental flaws in the security design of Microsoft applications.

Over the years, many in the computer industry have found it all too easy to ignore security. It usually doesn't show up in product demos.

Microsoft, in particular, has repeatedly plunged forward with a seductively simple yet dangerously powerful idea. In academia it's called "procedural attachment"--letting a program appear in place of data. Why do this? In a nutshell, programs are more versatile than data.

So Microsoft built ActiveX, a technique within Windows for automatically downloading and executing arbitrary programs. And Microsoft put macros into its word processor, along with a technique for automatically executing a macro as soon as a document is opened. And Microsoft made it easy for an e-mail script to do almost anything.

But the company didn't worry about security, and guess what? One of the ways in which programs are more powerful than data is that they can be designed to replicate. That's the basic principle behind the computer virus. A Word macro can save itself to other files. An e-mail script can re-mail itself to everyone in your address book.

Microsoft originally built its operating system and applications for the single-user desktop. When the company finally took notice of networking, its programmers designed applications for the isolated office environment, where all the computers are assumed to belong to friendly colleagues, not adversaries. But when the Internet exploded, Microsoft seemed ill-prepared to retrofit adequate security into its shaky software base.

Now we have the proof of that: Microsoft Chairman Bill Gates has issued a directive that, at long last, security shall be more important than getting the next release out the door. Microsoft's system programmers will spend the month of February getting training in computer security. I think they'll find they have a long road ahead of them.

Now, to be fair, Microsoft has hired a lot of smart people. Many of them are friends and colleagues whom I greatly respect, and some of them do already know some things about security. But we're talking about the behavior of Microsoft the company, looking at the products actually coming out the door. Experience has revealed numerous security holes in those products and has revealed that Microsoft has repeatedly made the same kinds of simple security blunders.

Often a computer virus propagates by exploiting a simple programming error, typically a C program failing to check for buffer overflow, allowing input data to overwrite arbitrary portions of memory. Last year's Code Red worm exploited exactly such a bug in Microsoft IIS.
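To make the bug class concrete, here is a minimal sketch in C--hypothetical code, not the actual IIS routine--in which a handler copies attacker-controlled input into a fixed-size buffer without checking its length, so an over-long request overwrites adjacent memory:

    #include <string.h>

    /* Hypothetical sketch of the bug class, not actual IIS code: the input
       is copied into a fixed-size stack buffer with no length check, so a
       request longer than 127 bytes overwrites adjacent memory, including
       the saved return address that a worm's payload can redirect. */
    void handle_request(const char *request)
    {
        char buffer[128];
        strcpy(buffer, request);   /* no bounds check on the copy */
        /* ... parse buffer ... */
    }

A length check before the copy is all it takes to close the hole, which is what makes the error so easy to make and so easy to miss.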

Such a programming error can also occur in a Java program--you can't prevent programmers from making mistakes--but it can't have that sort of disastrous consequence, because the Java Virtual Machine checks for such errors at run time and prevents the overwriting of memory outside the buffer.

But a virus can also propagate because of a fundamental flaw in the security design for an application. An example is the case of Internet Explorer trusting the declared MIME type of an attachment rather than examining the attachment itself. Another is the recent case of Internet Explorer allowing a script loaded from one site to surreptitiously access local files or other sites. Netscape's JavaScript was designed to prevent this through its "Same Origin" security policy, but Internet Explorer's JScript technology, which nominally supports the same scripting language, fails to implement the Same Origin policy.
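For readers unfamiliar with the policy, here is a rough sketch of the rule itself, written in C for consistency with the other examples here--simplified, hypothetical code, not how any browser implements it. Two URLs share an origin only when scheme, host, and port all match, and a script may touch documents only from its own origin:

    #include <stdio.h>
    #include <string.h>

    /* Simplified illustration of the Same Origin rule: two URLs count as
       the same origin only if scheme, host, and port all match. Real
       browsers handle default ports, paths, and much more; this is only
       a sketch of the comparison. */
    static int same_origin(const char *url_a, const char *url_b)
    {
        char scheme_a[16], host_a[256], scheme_b[16], host_b[256];
        int port_a = 0, port_b = 0;

        if (sscanf(url_a, "%15[^:]://%255[^:/]:%d", scheme_a, host_a, &port_a) < 2)
            return 0;
        if (sscanf(url_b, "%15[^:]://%255[^:/]:%d", scheme_b, host_b, &port_b) < 2)
            return 0;

        return strcmp(scheme_a, scheme_b) == 0 &&
               strcmp(host_a, host_b) == 0 &&
               port_a == port_b;
    }

    int main(void)
    {
        /* A script served by one site must not read documents from another. */
        printf("%d\n", same_origin("http://bank.example:80", "http://evil.example:80"));  /* 0 */
        printf("%d\n", same_origin("http://bank.example:80", "http://bank.example:80"));  /* 1 */
        return 0;
    }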

Microsoft has taken note of Java's success and responded with a language of its own called C#. It's been said that imitation is the sincerest form of flattery, and one only has to look at the many ways in which the form of the C# specification echoes that of the Java Language Specification to understand the extent of the homage. But C# tries to encompass all the power of C as well as features borrowed from Java. And security cannot be added to an otherwise insecure language.

Section 25 of the C# specification says (I quote verbatim): "C# provides the ability to write unsafe code. In unsafe code it is possible to declare and operate on pointers, to perform conversions between pointers and integral types, to take the address of variables, and so forth."

In a sense, writing unsafe code is much like writing C code within a C# program.

"Unsafe code is in fact a 'safe' feature," the C# specification continues, "from the perspective of both developers and users. Unsafe code must be clearly marked with the modifier 'unsafe,' so developers can't possibly use unsafe features accidentally, and the execution engine works to ensure that unsafe code cannot be executed in an untrusted environment."

Did they get their design right this time? I, for one, would bet against it. C# is already cast in stone as an ECMA standard. And only now has Microsoft decided to make security a priority.

Adding security to an existing, large insecure system will, in my judgment, prove an impossible task.