Flaw finders go their own way

Security researchers rebut the idea that "responsible" flaw disclosure means playing ball with software makers.

Robert Lemos, Staff Writer, CNET News.com
Robert Lemos covers viruses, worms and other security threats.
To many software makers and security consultants, flaw finder David Aitel is irresponsible.

The 20-something founder of vulnerability assessment company Immunity hunts down security problems in widely used software products. But unlike an increasing number of researchers, he does not share his findings with the makers of the programs he examines.

Last week, Immunity published an advisory highlighting four security holes in Apple Computer's Mac OS X--vulnerabilities that the security company had known about for seven months but had kept to itself and its customers instead of disclosing the problem to Apple.

News.context

What's new:
Despite pressure from Microsoft and other companies about the dissemination of security alerts, independent researchers are sticking to their own approach to flaw disclosure.

Bottom line:
The debate about when and how to inform people about security risks is causing fractures in the industry.

"I don't believe that anyone has an obligation to do quality control for another company," Aitel said. "If you find out some information, we believe you should be able to use that information as you wish."

Despite efforts from Microsoft and other companies to direct how and when security alerts are sent out, independent researchers like Aitel are sticking to their own vision of flaw disclosure.

For them, software companies have become too comfortable in dealing with vulnerabilities--a situation that has resulted in longer times between the discovery of security holes and the release of patches.

At the heart of the issue is the software industry push for "responsible" disclosure, which calls on researchers to delay the announcement of security holes so that manufacturers have time to patch them. That way, people who use flawed products are protected from attack, the argument goes. But the approach also has benefits for software makers, a security expert pointed out.

"As long as the public doesn't know the flaws are there, why spend the money to fix them quickly?" said Bruce Schneier, chief technology officer at Counterpane Internet Security, a network monitoring company. "Only full disclosure keeps the vendors honest."

The debate over how open the discussion of flaws should be is not a new one. The locksmith community has been arguing over the issue for more than a century and a half, and it has still failed to reach consensus.

Matt Blaze, a computer science professor at the University of Pennsylvania, has seen firsthand the ire that the issue can raise. Blaze has studied how security threats in the logical world compare to problems with physical locks in the real world. His papers have revealed weaknesses in locks that some professional locksmiths would have liked to keep secret.

"We, as professionals in the security field, are outraged and concerned with the damage that the spread of this sensitive information will cause to security and to our profession," a person claiming to be a retired locksmith wrote in a bulletin board posting about Blaze's work.

That reaction is nothing new, Blaze found. Locksmiths have always been close-mouthed about the weaknesses of locks and, as far back as the mid-19th century, an inventor of mechanical locks found it necessary to defend himself when he published details of such flaws.

"Rogues knew a good deal about lock picking long before locksmiths discussed it among themselves, as they have lately done," Alfred C. Hobbs wrote in a book published in 1853, according to Blaze's site. The author also wrote:

"If a lock, let it have been made in whatever country or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance."

In the past, many hackers and security researchers outed glitches without much thought of the impact on Internet users. Microsoft, among others, changed this. As part of its 3-year-old "Trustworthy Computing" initiative to tame security problems in its software, the company began an outreach program to support the work of the security community. At the same time, it started chastising those researchers who, it believed, released details of flaws too early.

Balance of power?
The result is a tradeoff between security researchers and software businesses that is supposed to benefit product users.

Apple, for example, keeps the work of its security team wrapped in secrecy and issues patches approximately every month. Microsoft has moved to a strict second-Tuesday-of-each-month patch-release schedule, unless a flaw arises that poses a critical threat to customers' systems. Database maker Oracle has settled on a quarterly schedule.

"We think it is in the best interest of our customers," said Kevin Kean, director of Microsoft's security response center. "A large portion of the research community agrees with us and works with us in a responsible way."

But some security researchers believe the tradeoff benefits companies too much, allowing them to adjust their patching processes at their convenience and to avoid fixes that would disrupt ongoing software development. That adds up to a lax attitude toward security, some experts believe.

For example, eEye Digital Security abides by Microsoft's responsible disclosure guidelines, but posts the length of time since it reported a vulnerability to the software giant on a special page on its Web site. The top-rated flaw on the company's Web site was first reported to Microsoft almost six months ago.

The detente also makes manufacturers look good in terms of the lag between the public warning of a flaw and the release of a patch. For example, a year-old study by Forrester Research gave a nod to Microsoft for minimizing the window of vulnerability, compared with most Linux distributions. That advantage is a direct side effect of the software giant's ability to convince security researchers to play ball, contrary to what one might expect.

"The general consensus in the developer community is that one would like to help the open-source projects rather than to torpedo them," said Laura Koetzle, vice president and research director of Forrester Research and the author of the report. "Whereas the temptation with a large faceless company is to disclose early and hurt them."

The dispute over disclosure goes to the heart of an old question: Is it responsible to give details of a threat, if the warning puts even more people in danger?

Those concerns drove a discussion on the Linux kernel mailing list last week. A suggestion that a contact point be created to focus on security issues in the kernel, the core of the open-source operating system, immediately blossomed into a debate about whether that list should be private or public.

In addition, the debate centered on the question of whether the vendor-centric security list, Vendor-Sec, takes too much time to fix important flaws.

"It should be very clear that no entity...can require silence or ask anything more than 'Let's find the right solution,'" Linus Torvalds, the original creator of Linux, said in the discussion. "Otherwise, it just becomes politics."

In general, though, the open-source world, with its inherently public development model, has largely learned to embrace security researchers.

"If we get a report from the outside, it is up to the one who finds the vulnerability to decide what happens to it," said Roman Drahtmueller, head of security for SuSE Linux, Novell's version of the operating system.

Microsoft, however, would rather work in secrecy with flaw finders to help prepare a fix. With the public spotlight on its security glitches and with hundreds of millions of users relying on its products, the software giant is very systematic in its approach to patching.

"It is best for customers, because we have a chance to provide updates before a large segment of the black hat community gets to make use of the vulnerability," said Microsoft's Kean.

Flaw finders who do not play by the rules don't get credit in Microsoft's security bulletins and are rebuked in press releases, among other sanctions.

"Microsoft is concerned that this new vulnerability in (product is named) was not disclosed responsibly to Microsoft, potentially putting computer users at risk," the software maker has typically written in e-mailed statements about vulnerability disclosures.

Despite the efforts of Microsoft and others, many researchers still don't feel that the companies take their findings seriously. While some security software sellers have lauded Apple for its response to vulnerability discoveries, an independent researcher gave the company a thumbs-down.

"It's really been like pulling teeth dealing with them over the years," said the researcher, who asked not to be identified. "I know a lot of folks that have found vulnerabilities in their stuff that pretty much refuse to deal with them."

Even if security researchers play ball with software makers and hold off on making vulnerabilities public, that might only engender a false sense of security, said flaw finder Aitel. He said that a small, but significant, number of malicious programmers could discover such security holes independently and abuse them.

"We don't feel that we are finding things that are unknown to everyone else," he said. "I am not special because I can run a debugger. Others can find--and use--these flaws."