Dave Aitel, the 20-something founder of vulnerability assessment company Immunity, hunts down security problems in widely used software products. But unlike an increasing number of researchers, he does not share his findings with the makers of the programs he examines.
Last week, Immunity published an advisory highlighting four security holes in Apple Computer's Mac OS X--vulnerabilities that the security company had known about for seven months but had kept to itself and its customers instead of disclosing the problem to Apple.
Despite pressure from Microsoft and other companies about the dissemination of security alerts, independent researchers are sticking to their own approach to flaw disclosure.
The debate about when and how to inform people about security risks is causing fractures in the industry.
"I don't believe that anyone has an obligation to do quality control for another company," Aitel said. "If you find out some information, we believe you should be able to use that information as you wish."
For researchers like Aitel, software companies have become too comfortable in dealing with vulnerabilities--a situation that has lengthened the gap between the discovery of a security hole and the release of a patch.
At the heart of the issue is the software industry push for "responsible" disclosure, which calls on researchers to delay the announcement of security holes so that manufacturers have time to patch them. That way, people who use flawed products are protected from attack, the argument goes. But the approach also has benefits for software makers, a security expert pointed out.
"As long as the public doesn't know the flaws are there, why spend the money to fix them quickly?" said Bruce Schneier, chief technology officer at Counterpane Internet Security, a network monitoring company. "Only full disclosure keeps the vendors honest."
The debate over how open the discussion of flaws should be is not a new one. The locksmith community has been arguing over the issue for more than a century and a half, and it has still not reached consensus.
Matt Blaze, a computer science professor at the University of Pennsylvania, has seen firsthand the ire that the issue can raise. Blaze has studied how security threats in the logical world compare to problems with physical locks in the real world. His papers have revealed weaknesses in locks that some professional locksmiths would have liked to keep secret.
"We, as professionals in the security field, are outraged and concerned with the damage that the spread of this sensitive information will cause to security and to our profession," a person claiming to be a retired locksmith wrote in a bulletin board posting about Blaze's work.
That reaction is nothing new, Blaze found. Locksmiths have always been close-mouthed about the weaknesses of locks and, as far back as the mid-19th century, an inventor of mechanical locks found it necessary to defend himself when he published details of such flaws.
"Rogues knew a good deal about lock picking long before locksmiths discussed it among themselves, as they have lately done," Alfred C. Hobbs wrote in a book published in 1853, according to Blaze's site. The author also wrote:
"If a lock, let it have been made in whatever country or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance."
In the past, many hackers and security researchers outed glitches without much thought of the impact on Internet users. Microsoft, among others, changed this. As part of its 3-year-old "Trustworthy Computing" initiative to tame security problems in its software, the company began an outreach program to support the work of the security community. At the same time, it started chastising those researchers who, it believed, released details of flaws too early.
Balance of power?
The result is a detente that is supposed to benefit product users.
Apple, for example, keeps the work of its security team wrapped in secrecy and issues patches approximately every month. Microsoft has moved to a strict second-Tuesday-of-each-month patch-release schedule, unless a flaw arises that poses a critical threat to customers' systems. Database maker Oracle has settled on a quarterly schedule.
"We think it is in the best interest of our customers," said Kevin Kean, director of Microsoft's security response center. "A large portion of the research community agrees with us and works with us in a responsible way."
But some security researchers believe the tradeoff benefits companies too much, allowing them to tweak their patching processes at their convenience and to avoid fixes that would disrupt ongoing software development. That adds up to a lax attitude toward security, some experts believe.
For example, eEye Digital Security abides by Microsoft's responsible disclosure guidelines, but posts the length of time since it reported a vulnerability to the software giant on a special page on its Web site. The top-rated flaw on the company's Web site was first reported to Microsoft almost six months ago.
The detente also makes manufacturers look good in terms of the lag between the public warning of a flaw and the release of a patch. For example, a year-old study by Forrester Research gave a nod to Microsoft