
Microsoft's security challenges

The software giant's new chief security strategist, Scott Charney, settles into his job just as Bill Gates elevates Trustworthy Computing to top priority.

What do you do when your hobby--security--becomes your job? You chase other people whose hobby is hacking.

That's the role Scott Charney has taken since he started working on security for the U.S. Justice Department in the early 1990s and, most recently, as Microsoft's chief security strategist.

In January, Microsoft pegged Charney to replace Howard Schmidt, who left the company to become vice chairman of the President's Critical Infrastructure Protection Board, a federal commission. Charney assumed his duties in April.

He arrives at Microsoft at a time when virtually everyone is thinking security, spurred by last year's terrorist attacks on the Pentagon and the World Trade Center. Charney testified about cyberterrorism last month on Capitol Hill.

Microsoft also is trying to take security more seriously. In a January e-mail to Microsoft employees, Chairman Bill Gates termed security the company's top priority. More recently, the company announced Palladium, a new security initiative blending Windows software with hardware designs from various chipmakers. CNET News.com recently spoke with Charney.

Q: I understand security was not your original calling, so to speak.
A: I'm a lawyer by training, so I'm a little bit like a fish out of water, although I'm more technical than most lawyers. I started my career as a prosecutor in Bronx County, New York. Then I joined the feds and went to Honolulu for three years. So, I haven't exactly had the normal career path. But it's been a very interesting ride.

Now you're at Microsoft.
Well, what happened is that after I left Honolulu, I went to the Justice Department down in (Washington) D.C. By fortuitous events, in February 1991, I was tasked with deploying the department's computer crime initiative. For the next nine years, I went around chasing hackers and doing crypto policy. When your hobby becomes your job, you're a very lucky guy.

So how is this going to help you in your new responsibility?
My experience in government is very helpful because of a couple of things. Obviously, Microsoft has a lot of relationships with the government--particularly in my area, which is critical infrastructure protection. Sometimes you meet with the government and they say, "You need to share the information." And the industry people say, "You don't understand the business difficulties in sharing information."

Because I've been on the government side, I understand why the government feels so passionately about it. Before coming to Microsoft, I spent two years at PricewaterhouseCoopers doing security consulting for the Fortune 500. I also have a sense about the business concerns about sharing information. So I kind of have the advantage of seeing it from both sides. I understand the passion of both sides. I think that helps me be effective in trying to bridge and find common ground.

As you move into this job, then, what do you see as some of your top priorities?
We're doing this thing called "Trustworthy Computing." It's an evolving concept. We've come up with our new paradigm, SD3, which is "Secure by Design, Secure by Default and Secure by Deployment." Secure by Design is doing the design better at the outset and making the products more secure. For example: The Windows security push, where we sent developers back to school (and) started doing way more extensive code reviews, threat modeling and designing in more security.

Secure by Default means that we start shipping products that are locked down by default. In the past, when a customer loaded something, he wanted it to work, so we turned on everything. Even if the user might not be using everything, everything was on. That meant the product had a very broad attack surface, much of it from things the person wasn't even using. So now we're going to start locking things down by default.

Secure by Deployment is doing a better job at patch management.
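
The threat modeling Charney mentions is the practice Microsoft codified in its STRIDE taxonomy (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege). As a rough sketch of the idea--not Microsoft's actual tooling, and with invented component names and threats--reviewing each entry point of a system against every STRIDE category might look like this:

```python
# Threat-modeling sketch using the STRIDE categories. The entry
# points and threats below are hypothetical examples, not output
# from Microsoft's actual process.

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Each entry point into the system gets reviewed against every category.
entry_points = {
    "login form": {
        "Spoofing": "stolen credentials replayed",
        "Denial of service": "account lockout via repeated failures",
    },
    "file upload": {
        "Tampering": "malicious payload inside the upload",
        "Elevation of privilege": "uploaded script executed as admin",
    },
}

for component, threats in entry_points.items():
    for category in STRIDE:
        finding = threats.get(category, "not yet analyzed")
        print(f"{component:12} | {category:24} | {finding}")
```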

How did customers respond to the Secure by Default approach?
It's interesting. When we started shipping IIS (Internet Information Server) 6 in beta with things locked down, customers would say, "You broke my applications." We said, "That's because everything's closed. You really need to go in and decide what should be open." So they said, "Oh, here's what we want. Why don't you put in a button that turns everything on?" Well, that just defeats the whole purpose.
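
The "turn everything on" button Charney refuses is the opposite of what secure-by-default means in practice: features ship disabled and each one is enabled deliberately. A minimal sketch, with hypothetical feature names rather than IIS's real configuration model:

```python
# Secure-by-default sketch: every feature ships disabled and must be
# enabled one at a time, on purpose. The feature names are made up;
# this is not IIS's real configuration model.

DEFAULT_CONFIG = {
    "static_files": False,
    "cgi_scripts": False,
    "webdav": False,
    "directory_browsing": False,
}

def enable(config: dict, feature: str) -> dict:
    """Turn on exactly one named feature; unknown names are errors."""
    if feature not in config:
        raise KeyError(f"unknown feature: {feature}")
    return {**config, feature: True}

# An administrator opts in to only what the application needs.
config = enable(DEFAULT_CONFIG, "static_files")

# Deliberately, there is no enable_all(): a master "turn everything on"
# button would restore the broad attack surface the lockdown removed.
print(config)
```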

So are you finding problems changing customers' attitudes about security?
Customers today are far more security conscious than they were three or four years ago. One of the things we say to customers is that security will break applications. If you do better security, there are times your apps will stop working. The problem is, what's the alternative? The alternative is to maintain the status quo, where everything is open. But the fact of the matter is we live in a world where that's not an acceptable option anymore.

Internally, has Microsoft been resistant to these changes?
One of the things that went away because of Bill's memo is (resistance to the need to) change the culture. When I was in the Justice Department...I would exhort (people) to do better security. People would see me coming, and they would put garlic around their necks. So when I came to Microsoft in April, I expected: "Here's the new guy in security, so get out the garlic." But I got out there, and people said: "Oh, you're the new security guy. We need to talk to you. Trustworthy Computing is the thing."

So the cultural issues are largely taken care of. The hard part then has been actually implementing this security. You have to develop it (and) develop it securely, and the threat keeps changing. On Sept. 10, if you asked, "What's the likelihood of four planes being hijacked, three hitting buildings and the World Trade Center collapsing?" everyone would have said the risk is zero. That's just not a threat. But on Sept. 11, everyone said 100 percent.

So how long will it take to change all of this?
It's definitely a multiyear process. We're starting to see some progress. People say to me that we're issuing patches all the time. And I say that if we're doing all of these extensive code reviews and threat modeling, hopefully we would find stuff and fix it. When you start trying to build more robust code, well, how do you get that code to market? You'll start to see some of the results of the security push in (Windows) XP Service Pack 1. Then in two or three years, when you get to Longhorn, the next version of the (Windows) operating system, you incorporate more of what you're learning and seeing.

Microsoft has been fiercely criticized about security.
It's interesting. People say that we're not as secure as Linux. If you go to Bugtraq (a security mailing list) and stuff, we're actually all in the same pack together. However, it's not sufficient for Microsoft to be in the same pack. If you write something that penetrates a Novell NetWare system, how many customers are going to be affected? A lot. But if you write something for Windows, a lot more. Therefore, with our market share comes added responsibility. We have to own up to that and do it right.

Are you issuing more security alerts because you are looking for more bugs?
We pulled the Windows developers off--8,000 (to) 8,500 people--for a couple of months to look for stuff, and if you put that many eyes on it, they're going to find stuff. There's always been an issue with computer crime statistics. When you go to victims and say there is a huge problem, they say they've never been hacked. And I say, how do you know? If I steal your car, how do you know? You go out to the parking lot, and it's gone. But if I hack into your system and copy your customer list, how do you know? You don't, because when you go and look--there it is. The metrics aren't the same. It's not like physical crime--assault, robbery, burglary--where there's physical evidence. You really have to look most of the time.

As we started deploying intrusion detection in government, we did a prototype in 1991 down at the University of California at Davis. Using that prototype intrusion detection system, we found 111 verifiable intrusions. We went to the system administrator, who had seen three. The system administrator isn't looking for abuse. When we automated the process, the numbers went through the roof. When you look, you see--and one of the things we're doing is aggressively looking.
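
The jump from 3 observed intrusions to 111 verified ones is the gap between manual and automated review. A minimal sketch of automated "looking"--the log format and threshold are invented for illustration, not taken from the 1991 UC Davis prototype:

```python
# Automated-looking sketch: scan an auth log for bursts of failed
# logins from a single source. Log format and threshold are invented
# for illustration.

from collections import Counter

LOG_LINES = [
    "fail root  10.0.0.5",
    "fail root  10.0.0.5",
    "fail admin 10.0.0.5",
    "ok   alice 10.0.0.9",
    "fail root  10.0.0.5",
]

THRESHOLD = 3  # flag any source with this many failed logins

failures = Counter(
    line.split()[2] for line in LOG_LINES if line.startswith("fail")
)

# A human skimming the log might miss the pattern; the loop never does.
for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible intrusion attempt from {source}: {count} failures")
```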

You came to Washington to testify about cyberterrorism. What is Microsoft's concern there, in areas such as combating terrorism or industrial espionage?
The problem with investigating hackers is that you never know what you have. If you think it's terrorism or industrial espionage, generally you reach the decision based on who's attacking you and why. But in an Internet attack, the one thing you don't know is who is attacking you and why. Many years ago, when I was at the (Justice) Department and we were gearing up for air strikes against Iraq in the early '90s, we saw a huge penetration come from the Middle East. We thought it might be an information-warfare attack trying to disrupt our pending military action. It was coming from the Middle East, but it turned out to be two juveniles from Cloverdale, Calif., who were looping through the Middle East and attacking the DOD (Defense Department). So you can't just worry about terrorist attacks--you have to worry about all kinds of network abuse.

Ultimately, you might turn up a terrorist. Our concerns about terrorist attacks in particular are twofold: One is that they could disable critical systems, do a lot of damage like shutting down a power grid. The second concern is that they could do something more limited--a narrow attack, coordinated with a physical attack. So, for example, if you look at the World Trade Center attack, if they had done a denial-of-service attack on the phone networks or the Internet 10 minutes before the planes hit the building, how would that have affected the response? All the emergency responders need those communications networks. There is also the issue of interdependency.

What has been your biggest challenge in your role?
One of the things that surprises me is how difficult it is to develop policies that can then be clearly implemented into technology. Sometimes everybody knows what the solution is, but getting the technology right is hard because programming is as much art as science.

When you look at patch management, it sounds easy, but it's not. How do you know if a machine actually has been patched? You create a patch. Let's say the patch writes a registry key saying the patch equals one. What happens if the (computer user) has a failure and reloads the operating system? How does the operating system reset that key to zero? What do you do if a virus writer creates a virus whose first act is to set that registry key so the machine appears to be patched? The machine is fooled into thinking it's been patched for the very virus it's not patched for. It's just an example of why, for me, you have to really think hard about this stuff and how to implement it securely.
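
Charney's registry example generalizes: a self-reported flag is precisely the thing a reinstall or a virus can reset, so a sturdier check verifies the patched artifact itself. A minimal sketch with hypothetical names--this is not Windows Update's actual mechanism:

```python
# Patch-verification sketch. A stored flag ("patched = 1") can be
# reset by an OS reload or forged by malware; hashing the file the
# patch actually changed verifies the fact, not the claim. All names
# here are hypothetical.

import hashlib
import tempfile
from pathlib import Path

def flag_says_patched(registry: dict) -> bool:
    # Naive check: trusts a value anything with write access can set.
    return registry.get("patch_kb000000") == 1

def file_is_patched(path: Path, expected_sha256: str) -> bool:
    # Sturdier check: recompute the digest of the binary on disk.
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

# Simulate the patched binary with a temp file so the sketch runs anywhere.
binary = Path(tempfile.mkstemp()[1])
binary.write_bytes(b"patched binary contents")
expected = hashlib.sha256(b"patched binary contents").hexdigest()

registry = {"patch_kb000000": 1}                # a virus could have set this...
binary.write_bytes(b"old, unpatched contents")  # ...while the file says otherwise

print("flag check:", flag_says_patched(registry))        # True (fooled)
print("file check:", file_is_patched(binary, expected))  # False (caught)
```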

What can you say about Palladium?
It's a hardware-software implementation. That's important because hardware is more secure than software--it's not as easy to tamper with. If we want to get to Trustworthy Computing, where things are secure, we do need hardware and software implementations.

The second thing is that the Palladium model, which is a couple of years off, will require hardware with a different chipset--you know, a security chip. While it works side by side with Windows, it won't replace Windows. It will enable all sorts of things for both consumers and businesses. For example, you will have "trusted agents"--pieces of code that are signed by people you trust. This is a good thing because it can eliminate a lot of viruses: if you get code that's not from anyone you trust, you can choose not to run it.
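
The "trusted agents" gate Charney describes--run code only if it carries a valid signature from someone you trust--can be sketched with ordinary public-key signatures. This is a conceptual illustration using the third-party Python cryptography package, not Palladium's actual hardware-backed design:

```python
# Sketch of the "trusted agents" gate: execute code only if it is
# signed by a key you trust. Conceptual only -- Palladium's real
# design involved hardware; this just shows the signature check.
# Requires the third-party `cryptography` package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A publisher you trust signs their code with their private key.
publisher_key = Ed25519PrivateKey.generate()
trusted_public_keys = [publisher_key.public_key()]

code = b"print('hello from a trusted agent')"
signature = publisher_key.sign(code)

def is_trusted(code: bytes, signature: bytes) -> bool:
    """True only if some trusted publisher's signature verifies."""
    for public_key in trusted_public_keys:
        try:
            public_key.verify(signature, code)
            return True
        except InvalidSignature:
            continue
    return False

# Unsigned or tampered code is refused rather than run.
if is_trusted(code, signature):
    exec(code)  # runs: the signature checks out
if not is_trusted(b"malicious payload", signature):
    print("refused: not signed by anyone you trust")
```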