Study: Equal security in all software

A Cambridge University computer scientist argues that, in the ideal case, proprietary programs should be as secure as those developed under the open-source model.

Robert Lemos, Staff Writer, CNET News.com
Robert Lemos covers viruses, worms and other security threats.
Proprietary programs should mathematically be as secure as those developed under the open-source model, a Cambridge University researcher argued in a paper presented Thursday at a technical conference in Toulouse, France.

In his paper, computer scientist Ross Anderson used an analysis that treats the hunt for software bugs like testing a program for its mean time to failure, a measure of quality frequently used by manufacturers. Under that analysis, Anderson found that his idealized open-source programs were exactly as secure as their closed-source counterparts.

"Other things being equal, we expect that open and closed systems will exhibit similar growth in reliability and in security assurance," Anderson wrote in his paper.

The decision to adopt a closed-source policy is typically driven by other motivations, such as foiling competition or protecting the reputation of the developer by limiting information about flaws, he said.

The research is unlikely to quell the long-running debate between proponents of open-source software and corporations that believe closed-source software is better. While providing ammunition for both sides, the paper also undercuts each camp's claims of superiority. Supporters in the Linux community have maintained that open-source programs are more secure, while Microsoft's senior vice president for Windows, Jim Allchin, argued in court that opening up Windows code would undermine security.

"The more creators of viruses know about how antivirus mechanisms in Windows operating systems work, the easier it will be to create viruses or disable or destroy those mechanisms," Allchin testified in May.

Anderson rebuts those types of arguments in his paper.

Idealizing the problem, the researcher defines open-source programs as software in which bugs are easy to find and closed-source programs as software in which bugs are harder to find. By calculating the average time before a program fails in each scenario, he asserts that, in the abstract, both types of programs offer the same security.
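Anderson's symmetry argument can be sketched with a toy simulation (the bug counts, discovery rates, and the `visibility` factor below are illustrative assumptions, not figures from his paper): opening the source speeds up bug discovery for attackers and defenders by the same factor, so at proportionally scaled time horizons the balance between the two sides is unchanged.

```python
import random

def discovery_times(num_bugs, rate, visibility, seed):
    # Each bug takes an exponentially distributed time to find;
    # "visibility" scales the discovery rate (open source: visibility > 1,
    # closed source: visibility < 1). Higher visibility means every
    # search -- attacker's or defender's -- finishes proportionally sooner.
    rng = random.Random(seed)
    return [rng.expovariate(rate * visibility) for _ in range(num_bugs)]

def defender_attacker_ratio(visibility, horizon):
    # Defenders and attackers hunt the same class of bugs independently.
    # Count how many bugs each side finds within the time horizon and
    # return the ratio, a crude proxy for the defenders' advantage.
    defenders = discovery_times(1000, 0.01, visibility, seed=1)
    attackers = discovery_times(1000, 0.01, visibility, seed=2)
    found_d = sum(t <= horizon for t in defenders)
    found_a = sum(t <= horizon for t in attackers)
    return found_d / found_a

# Opening the code (visibility 10x) shrinks every search time tenfold,
# for both sides alike, so at a tenfold-shorter horizon the defender-to-
# attacker ratio is exactly the same: the transparency factor cancels.
open_ratio = defender_attacker_ratio(visibility=10.0, horizon=10.0)
closed_ratio = defender_attacker_ratio(visibility=1.0, horizon=100.0)
print(open_ratio == closed_ratio)
```

The cancellation is the crux of Anderson's abstract result; his point in the rest of the paper is that real-world frictions, not the mathematics, are what break this symmetry.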

However, the paper has yet to be peer-reviewed, and errors in his assumptions could undermine his theory. Furthermore, he acknowledged that real-world considerations could easily skew his conclusions.

"Even though open and closed systems are equally secure in an ideal world, the world is not ideal, and is often adversarial," Anderson said.

For example, the same quality that makes it easier to find bugs in open-source code may also make it easier for attackers to find ways to exploit the code. On the other hand, makers of closed-source software may be slower to assign resources to fixing flaws and, for economic reasons, may be reluctant to admit that such flaws exist.

In an unexpected turn, Anderson used the latter third of the paper to criticize the Trusted Computing Platform Alliance, a security consortium started by Microsoft, Intel, Hewlett-Packard, Compaq Computer and IBM in October 1999.

While those companies claim that their focus is on security, it's really on creating a platform from which competitors can be excluded, he argued. Furthermore, the alliance's technology for assigning a computer a unique ID is really another plank that Hollywood and music companies can use to fence off their content.

"There are potentially serious issues for consumer choice and for the digital commons," he wrote.

Marc Varady, chairman of the TCPA, disagreed with Anderson's portrayal of the alliance as a way to control what content a PC can play, calling it "a total farce." The alliance is merely providing a way to verify that a PC is trusted, he said.

"We have no interest in creating a system that is controlled and unique in a way that, if you don't follow these capabilities, you can't use it," Varady said.