
Fuzzing browsers for fun

All software contains vulnerabilities, with some flaws worse than others. But should those flaws be made public after the vendor in question has been contacted?

Robert Vamosi


All software contains vulnerabilities, with some flaws worse than others. But should those flaws be made public after the vendor in question has been contacted? I say yes. So I applaud the security researcher who, earlier this week, declared that he'll post one Internet browser vulnerability daily throughout the month of July. If a software vendor can't respond quickly and either dismiss or patch a public flaw, then why should we continue to support that vendor? It should be an interesting month.

Good and bad
Software vendors can't possibly test their own creations for every conceivable use; they built the program and know how the app is supposed to work, so they're often blind to alternative uses. That's where third parties come in; they bring a fresh perspective, one that's outside the box that created the app. In a sense, I'm advocating open-source applications, because open-source apps benefit from having thousands of eyes view the code. But not everything can be open source; some software vendors need to make money, so the source code remains proprietary, hidden. That's where it all gets interesting: even if you can't see the code, you can observe it in action.

Security researchers are often on the vendors' side, reporting the vulnerabilities they observe in the hopes that the vendor will make the product stronger. Criminal hackers, on the other hand, only want to exploit the flaw and often release a Trojan or a virus instead of reporting the flaw. Both, however, spend hours observing a given app and trying to get it to fail. Not all software failures (crashes, reboots, and such) are exploitable. Like tea leaves, there's an art to reading software failures.

Fuzzing
The technique known as fuzzing feeds an application fake or malformed data and is an accepted method of software testing. Last year iDefense gave a presentation on file format fuzzing at Black Hat Las Vegas, and several more fuzzing presentations are already lined up for this year's conference. With fuzzing, you build a specific tool to look for a particular problem, for example buffer overflows, so that you can see where the application fails to validate its input data. Again, sometimes the fake data merely crashes the app -- not a security risk. But other times, a malicious attacker could use a buffer overflow to rewrite program data and compromise your PC remotely. The trouble with building a fuzzing tool around one specific problem is that you find only the kind of flaw you've already identified, such as buffer overflow issues. What about other kinds of errors?
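To make that first, narrowly targeted style concrete, here is a minimal sketch in Python. It assumes a hypothetical command-line program (./parse_image is just a stand-in name, not a real tool) and simply feeds it ever-larger inputs, watching for a crash -- the kind of failure that hints the program isn't checking the length of what it reads. Real fuzzers are far more sophisticated; this only illustrates the idea.

# Minimal targeted-fuzzing sketch: feed a hypothetical target oversized
# inputs and watch for crashes that suggest missing input validation.
import subprocess

def run_target(payload: bytes) -> int:
    """Run the hypothetical target with the payload on stdin; return its exit status."""
    try:
        proc = subprocess.run(
            ["./parse_image"],          # hypothetical target binary, not a real tool
            input=payload,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=5,
        )
    except subprocess.TimeoutExpired:
        return 0  # treat hangs as "no crash" for this simple sketch
    return proc.returncode

def main() -> None:
    # Oversized runs of a single byte are the bluntest probe for a missing
    # length check; real tools mix in format-aware mutations as well.
    for length in (64, 256, 1024, 4096, 65536):
        payload = b"A" * length
        status = run_target(payload)
        # On Unix, a negative return code means the process was killed by a
        # signal (for example, -11 for SIGSEGV) -- a crash worth triaging.
        if status < 0:
            print(f"possible crash at input length {length} (signal {-status})")
        else:
            print(f"length {length}: exited with status {status}")

if __name__ == "__main__":
    main()

A harness like this only tells you that the program died; deciding whether the crash is exploitable is the "reading tea leaves" part described above.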

A second kind of fuzzing uses a framework capable of generating many different kinds of fake data, so the application can be made to fail in different ways. Metasploit is a security framework that generates random, semi-valid data and lets researchers observe how an application handles it. One of its creators, HD Moore, recently turned it and other tools (Hamachi, CSSDIE, and DOM-Hanoi) on current Internet browsers, including Apple Safari, Mozilla Firefox and, of course, Microsoft Internet Explorer.
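As a rough illustration of that framework approach -- and emphatically not the actual code behind Metasploit, Hamachi, CSSDIE, or DOM-Hanoi -- here is a toy Python generator that emits many kinds of semi-valid HTML test pages for a browser harness to load. The tag and attribute lists and the mutation choices are invented for the example.

# Toy framework-style fuzzing sketch: generate semi-valid HTML pages that
# mix ordinary markup with oversized, boundary, and junk values, so a
# browser under test can be made to fail in ways no hand-written test
# would anticipate.
import random

TAGS = ["div", "span", "table", "iframe", "marquee", "object"]
ATTRS = ["id", "style", "width", "height", "src", "onclick"]

def random_value() -> str:
    """Pick a value from several 'semi-valid' families: boundary integers,
    very long strings, encoding junk, or ordinary-looking data."""
    return random.choice([
        str(random.randint(-2**31, 2**31)),      # boundary-ish integers
        "A" * random.choice([16, 4096, 65536]),  # oversized strings
        "%n%s\\x00\\xff",                        # format/encoding junk
        "normal",
    ])

def random_element() -> str:
    """Build one element with a random tag and a random pile of attributes."""
    tag = random.choice(TAGS)
    attrs = " ".join(
        f'{random.choice(ATTRS)}="{random_value()}"'
        for _ in range(random.randint(0, 8))
    )
    return f"<{tag} {attrs}>{random_value()}</{tag}>"

def generate_testcase(n_elements: int = 50) -> str:
    """Assemble one semi-valid HTML page to load in the browser under test."""
    body = "\n".join(random_element() for _ in range(n_elements))
    return f"<html><body>{body}</body></html>"

if __name__ == "__main__":
    # Write a batch of test pages; a separate harness would load each one
    # in the browser and watch for crashes or hangs.
    for i in range(10):
        with open(f"testcase_{i:03d}.html", "w") as fh:
            fh.write(generate_testcase())

The point of the framework style is breadth: instead of probing for one known bug class, you throw a wide variety of malformed-but-plausible input at the application and see what shakes loose.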

Go public?
But should we, the public, know about these flaws? I say yes. Apart from a few self-aggrandising researchers wanting to see their names in print, most security researchers go public out of sheer frustration. Microsoft has only recently started acknowledging in its security bulletins the researchers who first brought a vulnerability to its attention -- that's a start. But Microsoft also goes out of its way in those same bulletins to stress that even critical flaws can be exploited only under rare circumstances. I'd like the software giant to cut the legalese and simply say that this vulnerability may allow a remote hacker to take control of your desktop PC, period.

To be fair, all the fuzzing in the world still won't uncover every potential vulnerability. Someone will always think of an attack vector no one else has. Don't believe me? Days after Microsoft released 12 security bulletins -- eight of them deemed critical, covering 21 vulnerabilities in all -- someone released a Trojan based on a 0-day: an unreported, unpatched Excel vulnerability. The previous month, someone released an attack exploiting an unreported, unpatched Word flaw the day after Microsoft's May updates. Criminal organisations are working 24/7 picking apart Microsoft's products. Wouldn't it be better if security researchers outed these vulnerabilities first?

It comes down to responsiveness
So far, in the first five days of July, Moore has released three Internet Explorer flaws and one each for Firefox and Apple Safari. Of the IE flaws, one had already been patched before the post; the Firefox flaw had also been fixed previously. So Moore's point isn't to trash the respective browsers but to call attention to the fact that everyone surfing the Net today is doing so through an Internet browser, and some of those browsers have flaws, some of which may be critical.

Currently, security vendor Secunia reports that Mozilla Firefox 1.x has had 33 vulnerabilities reported, with 4 still outstanding (the oldest is from August 2004); Apple Safari has had 4 vulnerabilities reported, with 2 still outstanding (the oldest is from November 2005); and Internet Explorer 6 has had 106 reported vulnerabilities, with 21 still outstanding (the oldest dates back to November 2003). I say hold Microsoft's feet to the fire; if the software giant wants Internet Explorer to be the number one Internet browser, then it should fix its flaws in a more timely fashion.

Should security researchers go public with vulnerabilities in popular products after first contacting the vendor -- yes or no? TalkBack below.