
Gary McGraw on developing secure software (Q&A)

How secure is that software? McGraw and a team of security experts have assessed how well big companies develop products with security in mind.

Elinor Mills Former Staff Writer
Elinor Mills covers Internet security and privacy. She joined CNET News in 2005 after working as a foreign correspondent for Reuters in Portugal and writing for The Industry Standard, the IDG News Service and the Associated Press.
 
Gary McGraw, chief technology officer at Cigital and a co-author of the BSIMM study. (Photo: Cigital)

For more than a decade, Gary McGraw has been pushing companies to write better code so that the software we all rely on for desktop computing, Web surfing, and Internet communications works the way it should. That includes making sure it doesn't have defects that attackers can exploit to steal data and otherwise wreak havoc.

In 2002, Microsoft got the message, or rather got sick of hearing complaints from its customers about holes in its software that were letting high-profile viruses onto Windows desktops and corporate networks. The company launched its Trustworthy Computing initiative and is now a leader in secure software development and in showing how to do things right, McGraw said.

As chief technology officer at consulting firm Cigital, McGraw decided to analyze Microsoft's Security Development Lifecycle and to compare that with what other companies do. He and some cohorts got a rare look inside 30 firms, including Microsoft, Adobe, Google, Bank of America, Intel, Sallie Mae, Nokia, and Capital One. While their study, entitled "Building Security in Maturity Model" (BSIMM) and due to be released on Wednesday, ranks the companies according to their secure software development practices, it does not make the rankings public.

With start-ups flocking to the Internet and security problems hitting popular social sites and Web apps, concern over insecure software has only grown.

McGraw talked to CNET about what he has learned from his behind-the-scenes look at the study participants and what that means for safe Web surfing in the future.

Q: So, tell me about the study.
McGraw: We call it "B-Simm" for short. It's a study of 30 companies in which we looked at their software security initiatives. That is, how they try to figure out how to do a better job of building security into their software by training developers, getting the right kinds of tools and, most important, setting up the right kinds of activities.

What did you do for the study?
McGraw: We went out and met in person with the executives in charge of each software security initiative in all 30 firms, and we gathered data and built a model that describes the data very carefully. It's built by observation, which makes it novel from a computer science perspective. A lot of times in computer science, people have an idea and then grab dribs and drabs of data to justify the idea. In this case we got the data first.

"Consumers have for a long time had an implicit demand for security that hasn't been made explicit, but I think that's changing. People are sick of having insecure software and sick of having to have to get antivirus software because of all this broken software on their PCs."

Did you give grades?
McGraw: We observed 109 activities across all the data. We determined whether or not we observed each activity in a particular firm and then kept track of how many times we saw it. We ranked the activities, so we know which are more popular, that is, which were observed more often in the model.
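For a rough sense of how that tallying works, here is a minimal sketch in Python with entirely made-up firm names and activities; the study keeps per-firm data confidential, so none of this reflects real BSIMM data.

    from collections import Counter

    # Hypothetical observations: which activities were seen at which firms.
    # Firm names and activity labels are invented for illustration only.
    observations = {
        "Firm A": {"developer training", "static analysis", "penetration testing"},
        "Firm B": {"developer training", "static analysis"},
        "Firm C": {"static analysis", "architecture review"},
    }

    # Count how many firms perform each activity, then rank by popularity.
    tally = Counter(activity for firm in observations.values() for activity in firm)
    for activity, count in tally.most_common():
        print(f"{activity}: observed at {count} of {len(observations)} firms")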

So give me examples of the types of activities you're talking about.
McGraw: Some are pretty simple, like training your developers with an introductory software security course, or using a static analysis tool to review your code and remove vulnerabilities. Some are pretty complicated. A level 3 activity of the rocket science type would be to form a science team to look for new software vulnerabilities of a type that have never before been seen on planet Earth and eradicate those. There are actually a couple of firms that do that. I can't say who they are. In order to get access to this incredibly rich data in these firms, we had to agree to keep the data on particular firms under wraps. There are 15 activities that were incredibly common, and you can think of those as the core of software security activities.
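To make the static analysis example concrete, here is a made-up snippet (not from the study) showing the kind of defect such a tool typically flags, a SQL query built from raw user input, alongside the parameterized version it would recommend:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Building SQL by string interpolation: the classic injection pattern
        # that static analysis tools flag as a vulnerability.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized query: the driver keeps the input as data, so
        # attacker-controlled text cannot change the query's structure.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()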

What's the most important practice?
McGraw: One that was observed 100 percent of the time was having host and network security basics in place before starting to work on software security. Don't worry about software security if you don't even have a firewall or network security person on your staff.

Why is all of this important? What does the average consumer have to gain or lose?
McGraw: The real problem from a consumer perspective is that if you have a piece of software, there's no way for you to tell whether or not it's secure. Most consumers would like to have software that is secure, that can't be hacked, and they would like to use a browser that wouldn't allow bad guys to hack them with impunity the way things are now. In some sense, security is an invisible property, and we're trying with the study to make security much more visible. But the only way to do that and retain technical accuracy is to talk about the kinds of activities firms are doing when they carry out good software development. There's a nice history lesson we can draw from. Microsoft got started on the Trustworthy Computing initiative about a decade ago, and they've made a lot of progress in the way they approach security. Microsoft shares what they do through the publication of the Security Development Lifecycle and in books its executives have written.

So, how many of those 30 companies are doing a commendable job?
McGraw: A majority of them are doing a reasonable job. There are a lot of firms out there that aren't doing anything for software security, and it's better to be doing a few things than nothing at all. One thing consumers could do is ask for things like BSIMM scores from vendors and see if they will share information about what they are or aren't doing. To some extent, the BSIMM has become a de facto standard for measuring software security initiatives. The reason we came up with this measurement tool was so companies could improve their own software security initiatives. So it's not for consumers. But I bet every single reader of yours uses software that's produced by one of these companies.

I don't see Facebook or Twitter on the list.
McGraw: Sadly, they're not on here.

How about Mozilla?
McGraw: We've talked to the Mozilla guys, but we haven't carried out a measurement yet.

This is all voluntary, though, right? Why would a company do this if it didn't have to?
McGraw: I think that companies are coming to realize that consumers expect security. Consumers have for a long time had an implicit demand for security that hasn't been made explicit, but I think that's changing. People are sick of having insecure software and sick of having to get antivirus software because of all this broken software on their PCs. They would prefer to have the software just built properly. Some companies that realized it a decade ago have been working hard to do a better job.

"One part that has been overemphasized is the role of social engineering and using the victim's name to get them to click on a link. But the other half is what happens when you get them to click on the link?"

What role do the computer users play? For instance, a lot of the attacks these days use social engineering to trick people into trusting a message or Web link they shouldn't.
McGraw: It's like crashing your car. A long time ago, before the National Transportation Safety Board got involved in analyzing car crashes, cars were a disaster from a safety perspective. Sometimes the brakes would fail or the wheels would fall off. Now, cars are pretty reasonable from a safety perspective. But they're really safe if you wear your seat belt. We cannot make sure that everything you do on the Internet will be secure just by having more secure software. If you choose to do something incredibly silly, it's going to be problematic from a security perspective. There will always be people doing high-risk activities they shouldn't do.

Do we need a National Transportation Safety Board equivalent for software, and a seat belt law of sorts for computer users?
McGraw: Companies in the BSIMM study would argue that we don't need that yet because they are trying to do the right thing, and the BSIMM measurements do make that seem like a reasonable statement. The interesting thing is that in order to have something like the National Transportation Safety Board, we would first need a way of measuring software security initiatives, and the BSIMM is that measurement.

So, can you tell me which are the best companies in terms of secure software development?
McGraw: I can't say, because I want these companies to continue to share their data with us so we can report what we're actually seeing out there. The good news is that software security as a discipline is growing quickly. I think consumers can and should begin to demand more secure software, to ask for some evidence that software is more secure, and to reward with their dollars those companies that are doing a better job.

So, what about the tradeoff? If I get a really secure product can I still expect full functionality and interoperability?
McGraw: You actually can. A lot of people would argue that there's a tradeoff between security and functionality. But the fact is, getting rid of functionality is not what makes you more secure. The problem is that if we write software sloppily or design it poorly, it will be riddled with defects that an attacker can take advantage of. You can still have very high-functioning software that is nicely secure. If you compare Windows 98 to Windows 7, not only does Windows 7 have way more functionality, but it's also head and shoulders more secure than Windows 98. That shows companies can create software that is useful, that people want to buy, but which is secure at the same time.

Would good secure software development have prevented targeted attacks like those against Google and the others late last year?
McGraw: To some extent, yes, because even spear phishing through social networks often exploits some software flaw. One part that has been overemphasized is the role of social engineering and using the victim's name to get them to click on a link. But the other half is: what happens when you get them to click on the link? You have to have an exploit of some sort that takes advantage of a security problem for those attacks to work. In that way, more secure software is the only way we can make progress in computer security.