Decoding the lessons of Slammer

Newsmaker: Mike Nash, vice president of Microsoft's security business unit, takes stock of the software maker's war on worms and viruses.

Robert Lemos, Staff Writer, CNET News.com, covers viruses, worms and other security threats.
A viral one-two punch--the Code Red and Nimda worms--convinced Microsoft in mid-2001 that security needed to become its top priority. That decision led directly to the creation of the company's Trustworthy Computing initiative.

Company Chairman Bill Gates laid the groundwork for the program with an ambitious memo in January 2002 to employees, challenging them to improve the privacy and security of Microsoft software. The company subsequently halted much of its product development while about 8,500 developers were trained in secure programming and then reviewed the majority of the Windows code for security errors. Microsoft says the entire effort cost some $100 million.

But hackers continue to find holes in Microsoft's defenses. In January, the Slammer worm hit. This time, not only did customers get infected; Microsoft did, too.

Mike Nash, vice president of the security business unit at Microsoft, is the executive responsible for the security component of the Trustworthy Computing push. CNET News.com recently spoke with Nash about the effect of the Slammer worm on the Trustworthy Computing initiative and where Microsoft expects to take its security program in its second year.

Q: Now that Trustworthy Computing is in its second year, where does it go from here?
A: The first year of Trustworthy Computing was about dealing with issues that could correct for and mitigate the primary areas where customers feel pain. At the same time, (it was about) making investments in core infrastructure that would both help mitigate customer pain and be the right investment in the long term. In the second year of Trustworthy Computing, I see it very much as a deepening and broadening of the same set of issues.

Does the Slammer incident invalidate what you did?
At some level, you could argue that Slammer illustrates that the work we've been doing around patch management and our focus on things around Windows needs to extend beyond Windows so we have the same capabilities for products like SQL Server.

At the same time, there is a little bit of irony here in the sense that one of the things that the SQL Server folks did...was to not only focus on the Trustworthy Computing process for new products but also focus on going back and applying that process to existing products. So Exchange Server and SQL Server both went back and did security pushes against their existing versions of the product. Ironically, in the case of SQL Server, that updated version--Service Pack 3 for SQL Server--shipped the Monday before the Friday that Slammer was launched.

Then what is the lesson taught by Slammer?
Remember, there are two things that we are talking about: The security patch that we shipped last summer and the service pack that shipped four days before Slammer. I would not have expected people to have applied the service pack, because it was four days before. The evaluation period has to be longer than that. The key lesson of Slammer--maybe it's a re-lesson of Slammer--is our work is not done when the patch is available. Our work is done when the patch is installed on the majority of customers' systems.

The key lesson of Slammer--maybe it's a re-lesson of Slammer--is our work is not done when the patch is available.
So what needs to happen in order for patches to get installed? Do you need to assure customers somehow that the patch won't break their system? Or do you somehow force them to upgrade sooner?
I don't think we force companies to do anything. What we need to do is make it easier for customers to install those patches, both when they're sitting at a system and through automatic techniques like Windows Update and Software Update Services.

We also have to make sure that the critical things around patching that customers are worried about (get taken care of). Does it maintain compatibility with their existing applications? Is it easy to install? Does it have the ability to roll the patch back if you install it and decide you don't like it? A lot of the infrastructure we have built for Windows can now be extended out to things like SQL Server.
Focusing on Windows first was the right thing to do. I think the key thing here is to make it easier to install things like a security patch, so the customer has less work to do. And I want to be really clear: It's our issue. It is our job to make sure that friction goes down.
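(Editors' note: The rollback capability Nash mentions amounts to snapshotting what a patch will change before changing it. The Python sketch below illustrates the idea under that assumption; "Patch", "apply_patch" and the file callbacks are invented for this example and have no relation to any Microsoft tool.)

    # Hypothetical illustration only: install a patch with a rollback path.
    from dataclasses import dataclass

    @dataclass
    class Patch:
        patch_id: str
        files: dict  # path -> new file contents (bytes)

    def apply_patch(patch, read_file, write_file, compat_tests):
        # Snapshot every file the patch will touch, so rollback is possible.
        backups = {path: read_file(path) for path in patch.files}
        for path, contents in patch.files.items():
            write_file(path, contents)
        if all(test() for test in compat_tests):
            return True  # patch installed and applications still work
        for path, original in backups.items():
            write_file(path, original)  # a compatibility test failed: roll back
        return False

The design point matches Nash's list: compatibility is verified after installation, and the backup taken up front is what makes backing the patch out cheap.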

With Microsoft having problems of its own internally from the Slammer worm, do you think people have started to question Trustworthy Computing?
The most important part of the Trustworthy Computing conversation is that Trustworthy Computing is a journey. We didn't mean to set the expectation a year ago--nor do I think we did set the expectation a year ago--that you could simply flip a switch and Trustworthy Computing would be turned on for everything. I think that we made progress in some areas. We learned some things, and we know we have more work to do this year...Very clearly, patches have to be easier to install. That's an essential part of that.

How do you solve this patching problem? Is it a relationship issue that Microsoft needs to solve or a security issue that your customers need to solve?
There are two, maybe three things that Microsoft needs to do. First, we need to work with companies to help them build security plans, and patching is an important part of that. We have guidance that we need to do a better job of sharing with our customers. Second, we have to do a better job of creating tools that make it easier to install patches. And the third, which may sound simple but I think is important, is increasing the overall awareness of the importance of installing patches. People are more aware now than they were a year ago, but clearly we at Microsoft have more work to do to get more people to understand the importance of installing patches.

It's our issue, very clearly.

There are a lot of people out there who understand a lot less about software than Microsoft. And I think the feeling among some of them is a sort of fatalism that, if Microsoft can't protect themselves, then no one can.
The key thing here is that systems protected by any one of the five measures customers could have taken against Slammer were not affected by it. I think on some level that provides good evidence that proper patching can be effective.

You have to realize that Microsoft is a special environment where the barrier to installing a piece of Microsoft software is exactly zero. Any employee of the company can basically install any product they want on their system for testing and development purposes. There are customers that have this ability as well. Microsoft just tends to do it more because of the culture and environment of the company.

Very clearly, patches have to be easier to install.
What we realized here is that in the past, not having a patch installed affected only your machine; therefore, if it was a test machine, the importance of patching was relatively low. What Slammer taught us is that even if the value of the machine is low, the need to patch can be high, because an unpatched machine can affect systems far beyond itself. That was an important lesson for us internally.

So it's analogous to home users' systems being relatively unimportant to the Internet, but becoming more important if they're turned into zombies for an attack?
I don't think it's similar to that for the following reasons. To the home user, the continuous operation of their system is valuable. Therefore, the need to protect that home user is essential. That home user is as much a participant in Trustworthy Computing as an enterprise. If that person at home is going to start trusting software and do online trading and that kind of work, the software running on that home user's machine must be trustworthy.

I have a bunch of test machines under my desk. I install the product-of-the-week from Microsoft just to mess around with it and see what it's like. The value of that system underneath my desk is pretty low. I don't care if someone "fdisks" the machine and deletes all the data on it. I do that twice a week. (Editors' note: Fdisk is the command-line program used to partition hard drives.) While the value of the trustworthiness of that machine on its own is pretty low, its value on the network at Microsoft is relatively high. Even though that machine is not valuable, its ability to hurt other parts of the company was relatively high. Therefore, it needed to be more trustworthy than we thought.

So how do you solve that? Do you put them on a closed network that isn't accessible to the Internet?
One, we increase awareness of the need for people inside the company to follow the same SD3 law. (Editors' note: SD3 stands for Secure by Design, Secure by Default, Secure in Deployment--three tenets of creating secure products under Trustworthy Computing.) We make sure that people understand the need to ensure that products are patched properly--even if they are only test machines. It also means we have to make it easier for people to install the patched version on their desktops.

Second, we need to make sure that some of the things that were turned on in our default configurations on our corporate network are not on by default. The machine running under my desk probably had port 1434 open. (Editors' note: UDP port 1434 is used by SQL Server's Resolution Service, the component Slammer exploited to spread.) Third, while we did block port 1434 at the edge of our network, blocking 1434 between buildings on our network is something we did immediately after Slammer but should have done before it.
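(Editors' note: A minimal sketch of the kind of check Nash's point implies. The Python snippet below sends the standard single-byte query to UDP port 1434 to see whether a host's SQL Server Resolution Service answers; a timeout suggests the port is blocked, filtered or closed. The function name is our own invention, and a probe like this should only be run against machines you administer.)

    # Probe UDP 1434: does the SQL Server Resolution Service respond?
    import socket

    def resolution_service_reachable(host, timeout=2.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(b"\x02", (host, 1434))  # "enumerate instances" ping
            data, _ = sock.recvfrom(4096)
            return len(data) > 0  # got a reply: the port is open
        except socket.timeout:
            return False  # blocked, filtered, or no listener
        finally:
            sock.close()

    print(resolution_service_reachable("127.0.0.1"))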

In the coming year, there will definitely be more attacks. What do you think companies should be doing and how can Microsoft help them do it?
The most important thing is for Microsoft to do the five things that customers have told us to do. Customers told us to address issues in our products before they ship. They want us to continue to review products after they have shipped. Customers tell us that we need to do a (better) job of responding to externally identified issues with patches that are both high quality and available quickly...Microsoft must make it easier to deploy patches. And...Microsoft must provide other tools, in the form of both software and guidance, to help customers be secure.

The big lesson of Slammer, in terms of what we have done and what customers should do, is really about having a security strategy for your organization, which should include a patch management strategy.

Sounds as if you feel the need to get out in front rather than be reactive.
We have been lucky that issues like Slammer--(which exploit) known vulnerabilities--(were found by) others outside of Microsoft...and we had a chance to fix them. We realize that we need to be more proactive to make sure we understand vulnerabilities that have not been found externally (and) to make sure we don't have someone finding a new vulnerability and then exploiting it. The best way to respond is not to be just reactive but to be proactive.