Exposing software flaws--no easy job

Security researcher Christopher Soghoian reflects on the hard work that comes after finding a vulnerability.

Graduate student Christopher Soghoian is no stranger to controversy.

Last year he made a name for himself by making public an exploit for printing airline boarding passes, much to the dismay of the Federal Aviation Administration, the Transportation Security Administration and the Department of Homeland Security. He then went on to expose a phishing scam at Indiana University and a man-in-the-middle attack that made use of Bank of America's SiteKey authentication system.

Just last week, Soghoian went public with word of a brand-new vulnerability that affects some popular extensions used with the Firefox browser.

Although it may look like fun to poke holes in systems used by millions, the process of vulnerability disclosure is often fraught with peril. Lately, companies have sued researchers under the Digital Millennium Copyright Act (DMCA) because their research requires reverse-engineering proprietary code. In response, some researchers have forgone notifying vendors altogether and started posting vulnerabilities to public mailing lists.

Others seek shelter behind independent organizations that coordinate vulnerability disclosure between vendors and researchers. Still others simply sell their discoveries to security companies like iDefense, which take on all the risk and work with the vendor to address the vulnerabilities directly.

In the case of the Firefox flaw, Soghoian chose to contact the affected vendors himself and decided to give each 45 days to resolve the issue before he went public. CNET spoke to Soghoian about the delicate process of reporting a security vulnerability and any changes he'd make the next time out.

Q: The vulnerability you reported is not within the Firefox browser itself but on extension servers not under Mozilla's control. How did you come upon this vulnerability? Was it a hunch?
Soghoian: In this case it was basically a hunch. I wanted to see what Firefox was doing when it was loading. I had a suspicion that maybe it was calling home and revealing private information or something along those lines. I just wanted to see what Firefox was doing and who it was talking to when it started up. So I basically just started the browser up and watched the communications between the browser and anywhere else on the Internet.

There were a few Secure Socket Layer (SSL) connections to various servers run by Mozilla for the extensions that I had, but there were a number of plain-text requests going out to Google and other companies for other extensions. Those stuck out like a sore thumb. It wasn't so much that they were sending information; it was that they were downloading information. And they were downloading updates, so that was a much bigger issue. (After) looking at it with a network sniffer, about two hours later I had a working exploit. So it was very, very quick to go from a hunch to working code.
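The kind of check Soghoian describes can be approximated in a few lines. This is a toy sketch, not his actual tooling: given the update-check URLs captured with a network sniffer, it flags the ones fetched over plain HTTP, which an on-path attacker could intercept and rewrite. The endpoint URLs below are invented examples.

```python
from urllib.parse import urlparse

def find_plaintext_updates(observed_urls):
    """Flag update-check URLs fetched without SSL; an attacker on the
    network path can intercept and rewrite these responses."""
    return [u for u in observed_urls if urlparse(u).scheme.lower() != "https"]

# Invented example endpoints, not the actual servers involved.
observed = [
    "https://addons.example.org/extension-update.rdf",  # SSL: tamper-resistant
    "http://toolbar.example.com/update-check.xml",      # plain text: sticks out
]
print(find_plaintext_updates(observed))
```

Spotting the plaintext request is the easy half; the two hours of work lay in crafting a malicious response the extension would accept as an update.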

Because you're talking about intercepting a call out from your browser to an unencrypted server on the Internet, is this attack more likely to occur if you're using a wireless laptop?
Not necessarily. A wireless connection is really, really easy for an attacker to take over because he doesn't even need to be sitting right next to you. If you're using a (LAN) hub environment as opposed to a switched environment for your network, then this attack is possible--anyone who is plugged into the same hub can do it. If you're using a home router, be it a wireless router or a wired router, and you have not changed the default password, it is possible, using what's called a drive-by pharming attack, for an attacker to change your router's settings when you visit a malicious Web site. In that case, the attack could work as well. The scenario of someone sitting in a Starbucks and getting infected is the most common one, but there are others.

Let's say I have a hunch about the security of a product and it proves to be true; I can write an exploit and demonstrate it. Now what do I do?
There are many things you can do. It sort of depends on which school of thought you follow: responsible disclosure or full disclosure. If you follow the full disclosure school of thought, then you post the information, and potentially a proof-of-concept exploit, to a mailing list on day one to get your name out there.

That is not what you did.
Soghoian: I don't agree with that approach in most cases. Contrast that with the airline situation I was involved in last year, where the exploit had been known about for three years and ignored. I feel like vendors should at least have a fair chance, especially when we're talking about software where millions of users are potentially at risk. I feel an obligation to give the vendors a chance to fix things.

But there's only so much of a chance that we should give them. I gave Google and Yahoo in this case 45 days to fix it and they ended up not fixing it in that time frame. And then after it was made public, suddenly they rushed. Once it's out there, they definitely have a strong incentive to fix it--the shame factor, I think--but at least for my own peace of mind I wanted to give the vendors a heads-up in an attempt to fend off criticism after the fact. I wanted to be able to claim at least (that) I did the right thing.

Why give them 45 days? You didn't pick that number out of the air, did you?
CERT, a security organization run out of (Carnegie Mellon University), has a 45-day deadline, and that seemed about right to me. One of the other downsides to responsible disclosure is that someone else might come up with the attack at the same time. In this case, (my disclosure) came out after a couple of other people had figured it out within the last couple of weeks, and they were preparing to go public, too. So the longer you wait, the greater the chance that someone will try to steal your thunder.

So the burden's entirely on you, the researcher. You find the vulnerability, now you must behave responsibly, yet run the risk of others exploiting it in the meantime. Shouldn't there be a central repository you can tell when you discover a vulnerability?
You can tell CERT, or if you want to cash in, you can tell iDefense, or one of those other organizations, and they'll take it off your hands. I read an academic paper about a guy who'd sold an exploit for $50,000 to a branch of the U.S. government. In that case the U.S. government didn't tip anyone off; they kept it to themselves. But there are many things you can do.

In this case I wanted to maintain friendly relations with the companies. I worked at Google last year. I traveled around the world with money that Google put into my bank account, essentially, so I felt some obligation to treat them with respect. So I chose to contact the vendors myself instead of giving information to CERT or selling it on the open market. For me that meant there was a lot more work. I had to do the coordination myself.

There's also some risk involved.

Whenever you follow the responsible disclosure route, you're always taking a risk because the company knows your name and they can sic lawyers on you. So in my case I felt confident because I was going to be in Europe, beyond the reach of a trivial lawsuit, but there's always that risk. And if you're doing something for Apple, which has made a name for itself by suing researchers, then more than likely, if you're going to go the responsible disclosure route you're going to at least do it anonymously, unfortunately.

Companies often claim that reverse engineering violates the DMCA, right?
They use the DMCA as a means to an end. Essentially what they're trying to do is silence the researcher, and the DMCA is the tool of choice. If the DMCA didn't exist, they'd cite some other rule, maybe the Computer Fraud and Abuse Act or something else. The DMCA basically says you can have an encryption algorithm and no matter how weak it is, if it's protecting anything that's copyrighted, then anyone who reverse-engineers it is in theory breaking the law. The codes that come in cereal boxes in theory could come under the DMCA. We have to recognize that it's there, but in terms of respect, I don't think that law gets much credit in the industry.

Do you think the path you took to disclosing this Firefox extension vulnerability works? Would you go this route again?
It was definitely far more work for me to coordinate the vulnerability disclosure than to tell CERT or to sell it on the open market. The problem for me was that Google, especially, stonewalled me. I sent over five or six e-mails to them, and for 30 days I heard nothing back from them. It's really frustrating for the researcher because you're trying to do the right thing, and all you want to hear is, "We're working on it, we'll get back to you soon." You want to have some kind of heartbeat so that you know they're alive and they're working on it. And you don't want to feel that you're just being stonewalled.

So it's really frustrating when the companies just do not respond to your e-mails at all. It gives you food for thought, and it disincentivizes you from going to them the next time, when you know for a fact that iDefense will return your e-mails and will actually send you a check in the mail, too.

I'm at the beginning of a Ph.D. in computer security, so there will be other vulnerabilities down the road. If this were a vulnerability with Apple, I wouldn't have taken the chance because I'm almost certain Apple would have sued me. Google has a very good reputation in the security community for responding to things. But up until now, most of those have been issues that were made public, and Google scrambled to fix them in a day or two. There isn't much information out there on Google's response to responsible disclosure, so, yeah, this experience has been eye-opening.

Some vendors didn't respond.
There is another responsible disclosure policy out there that's quite respected by researchers--it's called RFPolicy. It stipulates that the vendor should communicate with the security researcher every five days by e-mail, and that failure to do so will result in immediate publication. Now, I didn't want to go down that route because I thought it was a little bit harsh, but next time around, when I send the initial disclosure e-mail, I think I'm going to say this is the policy I'm following--if I don't hear from you every five days, I will go public.
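The five-day rule he describes reduces to a simple date comparison. A hypothetical illustration (the function name and dates are invented for this sketch, not part of the policy itself):

```python
from datetime import date

MAX_SILENCE_DAYS = 5  # the RFPolicy-style threshold described in the interview

def may_publish(last_vendor_response, today):
    """True once the vendor has been silent longer than the policy allows."""
    return (today - last_vendor_response).days > MAX_SILENCE_DAYS

print(may_publish(date(2007, 5, 1), date(2007, 5, 4)))   # 3 days of silence: keep waiting
print(may_publish(date(2007, 5, 1), date(2007, 5, 10)))  # 9 days of silence: go public
```

The point of stating the rule up front in the initial e-mail is that the clock, not the researcher's patience, decides when publication happens.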

This is my first time divulging a vulnerability. I didn't really know what I was doing, so I had to learn it on the job. I think next time around I'm going to be much clearer about what the rules are and what the consequences are for lack of communication. But I really feel that if the companies want researchers to come to them, and not publish on day one or sell it on the open market, they need to make it as easy as possible and they need to incentivize researchers. I'm not talking about giving away T-shirts here. Being reasonable and responding to our e-mails is a bare minimum, given that we are doing them a service, a free service in many cases.

It's been two days since you went public. Have you since heard from any of the vendors?
I have. Some have come out of the woodwork. One company, a part of Yahoo, left comments on my blog and several other journalists' blogs stating that they had actually fixed the issues before I went public. Somehow the news filtered through the Yahoo security team, and the guys patched things and got it out on time, so they were actually not vulnerable the day I went public.

Google tried to fix things. They patched some of their systems, but they're still exposed. As of 1 a.m. last night they hadn't fixed everything, so users are still vulnerable, including the new Google Gears extension, which I guess is getting a lot of publicity. The entire Google family of extensions is still vulnerable. Yahoo e-mailed me today (Friday, June 1) and told me they're expecting to have a fix out at some point in the future, but they're scrambling to find enough machines to host the updates. The reason is that when you shift from a regular HTTP server to an SSL-enabled Web server--think about 2 to 3 million customers querying it every day--a lot more CPU is required. So if they had a hundred machines serving up those updates before, they're now going to need two or three hundred machines. It's an actual bump in CPU power that's required. I believe Yahoo is now scrambling to find enough machines for that. I haven't heard back from any other vendors, though.
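The arithmetic behind that estimate is straightforward. A back-of-envelope sketch, where the per-server capacity and the 3x SSL CPU multiplier are illustrative assumptions consistent with the figures he cites, not numbers from Yahoo:

```python
import math

def servers_needed(daily_requests, requests_per_server_per_day, ssl_cpu_multiplier=1):
    # SSL multiplies per-request CPU cost, so scale the load up rather than
    # the capacity down (keeps the division integer-friendly).
    return math.ceil(daily_requests * ssl_cpu_multiplier / requests_per_server_per_day)

daily = 2_500_000      # "2 to 3 million customers querying it every day"
per_server = 25_000    # assumed plain-HTTP requests one machine can serve per day
print(servers_needed(daily, per_server))                        # plain HTTP
print(servers_needed(daily, per_server, ssl_cpu_multiplier=3))  # SSL handshakes at 3x CPU
```

With these assumptions the fleet goes from 100 machines to 300, matching the "hundred machines... two or three hundred machines" jump described above.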

The extra CPU required to patch this--could that be why some vendors haven't?
Most of the companies that we're looking at here are multimillion-dollar firms. I'm not too concerned that AOL or Apple are not going to be able to pay for a few extra machines. For firms that cannot afford that, the Mozilla extension servers are available to them whenever they want to use them. (The multimillion-dollar firms have) gotten themselves into this mess because they wanted users to download stuff directly from them. They wanted to control the eyeballs 100 percent of the way; they wanted to be able to monitor downloads and everything else, so now they are paying the consequences for it. Should they decide it is too much trouble for them, they can always go back to using the Mozilla servers.

The only aspect that's a little bit complicated is that there are terms and conditions with having your extensions hosted on Mozilla, and you must not disable the update prompting...If you have an extension hosted on the Mozilla servers, users must be prompted when the extension is updated. Google in particular disabled the extension update notification so that updates would download silently without the user ever being told. That kind of behavior will get you thrown off the Mozilla servers.

Any reason why Google would do that?
There are lots of reasons why. People are more likely to update when they don't have to choose to do so. From a security standpoint it's probably a good idea to have people running the latest code, but if the update process itself is vulnerable then that puts people in a lot of danger. Someone made a management decision there and in this specific case it came out wrong.