News.com special report: Securing Microsoft: A long road

At software giant, pain gives rise to progress

Redmond's security practices have been transformed since threats like Slammer and Blaster first wormed their way onto the scene.

By Ina Fried
Staff writer, CNET News.com
December 3, 2007, 4:00 a.m. PST

Editors' note: This is part 1 in a series examining how Microsoft's security strategy has evolved over the past decade.

REDMOND, Wash.--With a measure of pain, Matt Thomlinson recalls the summer of 2003.

"I remember buses pulling up to the Microsoft campus to shuttle engineers away from their day jobs to go work the phones down at (product support)," said Thomlinson, who heads Microsoft's security engineering efforts. "That was just heartbreaking."

The Blaster worm had just hit, swamping Microsoft's support lines with calls from angry customers.

Andrew Cushman, director of the Microsoft Security Response Center, remembers standing in Muck boots and installing a catch basin in his front yard when he got a call from an account manager. It was just days after September 11, 2001, and one of Microsoft's largest customers had just been hit with what turned out to be the Nimda worm.

George Stathakopoulos, Cushman's boss, still hasn't seen the end of the movie Master and Commander. In spring 2004, he was sitting on his couch watching the film when he got the call that Sasser had hit.

Indeed, much of Microsoft's current security practices can be traced to painful lessons learned during the past decade by people whose job it is to secure Microsoft's products.

It was the experience of Mike Nash, a vice president at Microsoft, that finally pushed the company to institute calling trees as a way to reach people quickly in an emergency. When the Slammer worm hit in January 2003, Nash had to work feverishly to track down the vice president of SQL Server, Gordon Mangione, eventually locating him at his sister's wedding in Canada. (Slammer exploited a flaw in Microsoft's SQL Server database software and spread so quickly that it amounted to a denial-of-service attack on the networks it hit.) Nash first heard reports of Slammer on a local news radio station at 6 a.m. At first, he thought he was dreaming. But as the report played a second time, he knew it was real and headed into work. "I was the second one there," Nash recalls.

Slammer also taught the company that it was not enough to have a patch; the patch had to be easy enough to deploy so that most customers would do so, lessening the chances that outbreaks would propagate so quickly. And it was Blaster that taught the company that it wasn't enough to patch a single flaw; it needed a systematic process for catching whole classes of vulnerabilities, a realization that paved the way for Microsoft's current approach, known as the Security Development Lifecycle, or SDL.
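
One example makes that shift, from patching individual flaws to eliminating whole classes of them, concrete. The sketch below is purely illustrative and is not Microsoft's code or part of the SDL itself; it shows the kind of whole-class fix an SDL-style coding rule enforces, banning unbounded string copies (the category of bug behind many worm-era flaws) in favor of length-checked ones.

#include <stdio.h>
#include <string.h>

/* Illustrative sketch only, not Microsoft code: an SDL-style rule removes an
   entire bug class rather than fixing one reported instance at a time. */

/* The classic pattern behind many worm-era flaws: copying attacker-controlled
   input into a fixed-size buffer with no length check. */
void copy_unsafe(const char *input) {
    char buf[16];
    strcpy(buf, input);                        /* overflows buf for input longer than 15 bytes */
    printf("%s\n", buf);
}

/* The rule-enforced replacement: the copy is bounded by the buffer size,
   so the overflow class disappears no matter what the input is. */
void copy_safe(const char *input) {
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", input);   /* truncates instead of overflowing */
    printf("%s\n", buf);
}

int main(void) {
    copy_safe("a hostile string far longer than sixteen characters");
    return 0;
}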

"We've put a lot of our best people in these areas," Microsoft Chairman Bill Gates said in an interview with CNET News.com. "Still tons to be done, but you know, we've definitely made five years of progress in the last five years."

Much of that traumatic on-the-job training reflects Microsoft's decade-long evolution in how the company and its employees deal with security. Until 1997, security was seen mainly as a set of features bolted onto software long after product design and development. The idea of securing code as it was being written had not been considered.

IE flaws send Microsoft scrambling
That all began to change in March 1997, when the first significant flaws were discovered in Internet Explorer. Researchers at Worcester Polytechnic Institute found a vulnerability in browser shortcuts known as .LNK files. Even as Microsoft was scrambling to deal with the problem, word of the flaw hit cable television news. A few hours later, researchers at the University of Maryland found a second problem and reported it to Microsoft.

At the same time, the IE team, which Stathakopoulos was part of, was in the process of moving into a new building. The timing couldn't have been worse: most of the team's equipment was in boxes. Someone had to run to a store to buy a power supply for one of the team's laptops--the power cords had been packed away--before the battery went dead. Jason Garms, now a senior director for technical strategy, wrote the company's first security bulletin in a Windows Notepad file and then copied it to a floppy so it could be distributed to customers.

At the time, the company didn't even have a system in place where outsiders could report security bugs directly to Microsoft engineers. The IE flaw came to light because someone had called Microsoft's support line and the matter had gradually escalated.

"We said 'This has to stop,'" Stathakopoulos recalls thinking of the disjointed system at the time. "It's not working for us."

In the aftermath of that bug, Microsoft created the Microsoft Security Response Team as well as a separate Internet Explorer security group. The company also created an e-mail address where outsiders could report potential issues.

The Microsoft Security Response Team was made up of volunteers--employees who had other day jobs, but were interested in helping out when there was a security problem.

Although those early IE flaws awakened Microsoft to the dangers posed by the scale of the Internet, it took several more waves of attacks to fully form the company's security strategy.

The arrival of Melissa, on March 26, 1999, knocked down one of the core assumptions of Internet security at the time: that by avoiding e-mail from unknown senders, one could avoid most attacks.

"They broke the trust between the user and his address book," Stathakopoulos said of the worm's authors.

"When we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box."
--Bill Gates, in January 2002 Trustworthy Computing memo

Mass mailers like Melissa and I Love You were largely annoyances, though many companies had their e-mail systems overwhelmed by the sheer volume of messages the viruses sent. But the threat grew as mass mailers began carrying payloads designed to do damage, a period Stathakopoulos calls the era of "weaponized" vulnerabilities.

Two major attacks, Code Red and Nimda, hit in mid-2001, striking Microsoft's corporate customers hard and becoming a major headache for not only the security team, but also for the company's top brass.

In the wake of Code Red and Nimda, Gartner issued a report saying companies should "immediately" consider moving away from Microsoft's Internet Information Server product and toward rivals. That was another painful lesson, Cushman said. "Every single person on the IIS team took it personally that there was an outbreak." Cushman said the team felt the report mischaracterized the situation, but it also prompted the unit to take new steps, such as bringing in Microsoft's top security experts to train members in writing better code, followed by a "bug bash" aimed at rooting out bad programming from the product.

In late 2001, Gates began drafting Microsoft's response, in what ultimately became his now famous January 2002 Trustworthy Computing memo.

"When we face a choice between adding features and resolving security issues, we need to choose security," Gates wrote in his missive to employees. "Our products should emphasize security right out of the box."

But not everyone took the Microsoft chairman at his word.

"At the time I thought it was a PR initiative," said Adam Shostack, who was then working for Zero-Knowledge Systems in Montreal and is now a senior program manager at Microsoft, working on the company's secure development approach. Shostack said he changed his mind in the ensuing months as Microsoft followed up Gates' words with action.

Microsoft stopped virtually all Windows development work, and for a month all of its engineers focused on security-related work.

It wasn't a demonstration of rigorous coding practices nearly as much as it was a show of brute force designed to attack the problem at its source.

"It was 'take all the engineers and have them each go review code,'" Thomlinson said. "It was kind of the infancy of security engineering."

"You could almost see the aircraft carrier turning. It took a lot of miles and a lot of time, but now it's got the power of the aircraft carrier behind it."
--Katie Moussouris, security strategist, Microsoft

Even so, there was still a culture inside the company that attempted to play down the bugs to the outside world.

"We used to get the reports and say, 'That's not a security bug,'" Stathakopoulos said.

But when Nash was appointed to head up the security team in late 2001, he came in with a different approach: fess up and tell the world about potential security problems. "He said, 'No, you've got to be transparent (with the outside world),'" Stathakopoulos said, recalling that his team looked at Nash as if he were insane.

"People already think our products are bad, and if we start talking about those issues more and more, people will think we are horrible," Stathakopoulos said he argued at the time. But Nash persisted, arguing that the company might initially take some added lumps, but over time the company would come to be respected.

Looking beyond the software industry
In building its security response apparatus, Microsoft had to look beyond the software industry. "No one had had to figure this out before us," Nash said. One of the companies Microsoft used as a guide was chemical maker DuPont. Though the parallel was not exact, Microsoft studied how DuPont responded to train derailments.

Among the lessons it learned: emergencies occur at all hours, so Microsoft's response operation needed to be staffed far more of the time. "It wasn't quite banking hours, but it wasn't 24 by 7," Nash recalls of the system in place at the time.

Katie Moussouris, who worked at security firm @stake for a number of years before joining Microsoft, said she recalls a slow but noticeable shift in Microsoft's attitudes and practices.

"You could almost see the aircraft carrier turning," she said. "It took a lot of miles and a lot of time, but now it's got the power of the aircraft carrier behind it," said Moussouris, a security strategist for the Security Engineering and Communications Group.

While the effort would eventually pay dividends, it wasn't enough to head off the era of big worms that kicked off with Slammer in January 2003.

Stathakopoulos recalls getting a call at 3 a.m. from Symantec's Vincent Weafer, saying that a known bug in SQL Server had been exploited. A bit groggy as he answered the phone, Stathakopoulos recalls thinking that the company had patched the flaw months earlier and that there was nothing more that Microsoft could do. He headed back to bed. About 20 minutes later, he got a call from his boss, Nash. Stathakopoulos was told he had better do something.

Window Snyder remembers being in a meeting that Saturday morning when Stathakopoulos pointed to her and motioned for her to leave the room. The two headed straight to another conference room--one full of people "with fire coming out of their ears."

"That was a very painful experience," said Snyder, who at the time was part of Microsoft's security outreach team, but has since left Microsoft and now serves as "chief security something-or-other" for Mozilla. "It was pretty intense."

Slammer was followed by Blaster and others. Snyder recalls a sense of dread that permeated the team during 2002 and 2003.

"It kind of seemed at that point like it would never end," Snyder said.

But things did shift. Mass mailers gave way to the rise of botnets--networks of computers taken over by hackers to send spam, harvest credit card information, or generate fraudulent clicks on online ads. Widespread attacks fell out of favor with criminals, who found there was more money to be made from targeted attacks. The change forced Microsoft, once again, to shift its approach, as security threats no longer merely exposed customers to headaches and lost productivity but to financial loss as well.

"I don't want every team to have to learn those painful lessons first-hand, but yet I want each of them to get that visceral understanding of how important this is."
--Andrew Cushman, director, Microsoft Security Response Center

As threats became less a random crisis and more a fact of life, Nash realized the security team needed its own space. "It used to be we'd take over a conference room and people would say, 'We need the conference room,' and we'd say 'No, we need the conference room.'"

Communications is an important part of emergency response, but Nash decided it was necessary to build two "war rooms" so the engineering team could brainstorm separately from the employees communicating with customers and the media. The rooms, part of the Microsoft Security Response Center, were completed in June 2005. A door connects the two, making it easier for people in both areas to get together when needed.

Microsoft also began to realize that it simply couldn't afford for everyone in the company to learn security lessons the hard way. It needed more people to get exposure to the threats that were out there.

The need for more dialogue with the security community prompted Snyder to suggest the idea for Blue Hat, an internal Microsoft conference where hackers would present in front of the company's engineers. The idea was controversial at first, with not everyone thinking it was such a good idea to put Microsoft's engineers face to face with the folks they blamed for many of their headaches.

Among those initially opposed was then-Windows chief Jim Allchin, who didn't like the idea of having to sit face to face with the people who poked holes in the products his team created. Nash recalls Allchin saying to him, "Let me get this straight: you want the people hacking us and telling us the problems in our system, and you want me to listen to them?" Yes, Nash said, that's exactly what we want you to do.

Uncertainty among outside researchers
The outside researchers, too, were skeptical about Microsoft's motives and commitment.

But it proved to be a hit in both camps. Blue Hat, which was first held in March 2005, is now a twice-yearly event, with the most recent one taking place over two days in September. As usual, Microsoft's engineers were confronted by hackers demonstrating a range of techniques that can be used to attack Microsoft's products. In perhaps its most confrontational invitation of the year, Microsoft invited the team from WabiSabiLabi, a group that operates an auction site where people can bid on vulnerabilities, much like collectors bid for trinkets on eBay. The enterprise is happy to sell to vendors who can patch the hole, or to people who might have other purposes in mind.

Although the presence of WabiSabiLabi's Roberto Preatoni at Blue Hat in September was unnerving, Cushman says that it's important for Microsoft's engineers to understand the current threats. (In November, Preatoni was arrested in Italy in connection with a spying investigation at Telecom Italia, where he previously worked.)

"It's very important when we build an update that it won't break anything."
--Adam Shostack, senior program manager, Microsoft

"I don't want every team to have to learn those painful lessons first-hand, but yet I want each of them to get that visceral understanding of how important this is," Cushman said. And there's nothing like having a hacker come in, he said, before correcting himself, "having a security researcher come in and demonstrate vulnerabilities in your product to bring that lesson home."

More recently, the company has started an exercise called "Defend the Flag," in which IT pros and security newbies get a day of training on setting up a Windows network before having to build and protect one themselves.

"If the network you've set up and are defending (gets) compromised because of misconfiguration or some vulnerability, you are going to remember that," Cushman said.

While many of the lessons center on ways Microsoft needed to do more or move faster, one of the strongest lessons for Shostack was a story he heard at an earlier Blue Hat about the need to proceed with caution. At the event, a colleague described Microsoft's attempt to prevent bitmap exploits by more narrowly defining what could be in such a file. In trying to shore up security, Microsoft had also broken some legitimate files, which meant that companies that used a bitmap logo on their invoices couldn't print bills.
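
To make that trade-off concrete, here is a hypothetical sketch; the field names follow the standard BMP file-header layout, but the validation policy and the code itself are invented for illustration and are not Microsoft's actual fix. A parser hardened to accept only the narrowest definition of a bitmap will also reject unusual-but-valid files, which is exactly how a security fix can end up breaking customers' invoices.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration, not Microsoft's actual code: an over-narrow
   definition of an "acceptable" bitmap. */
#pragma pack(push, 1)
typedef struct {
    uint16_t type;         /* "BM" signature (0x4D42)                 */
    uint32_t file_size;
    uint32_t reserved;
    uint32_t data_offset;
    uint32_t header_size;
    int32_t  width;
    int32_t  height;
    uint16_t planes;
    uint16_t bit_count;
    uint32_t compression;  /* 0 = uncompressed, 1 or 2 = RLE variants */
} bmp_header_t;
#pragma pack(pop)

static int is_acceptable(const bmp_header_t *h) {
    if (h->type != 0x4D42) return 0;           /* reasonable: reject non-bitmap data */
    if (h->width <= 0 || h->height == 0) return 0;
    if (h->compression != 0) return 0;         /* too strict: also rejects valid     */
                                               /* RLE-compressed logos, breaking the */
                                               /* documents that embed them          */
    return 1;
}

int main(void) {
    /* A perfectly legal RLE8-compressed 64x64 logo is turned away. */
    bmp_header_t logo = { 0x4D42, 0, 0, 54, 40, 64, 64, 1, 8, 1 };
    printf("logo accepted: %d\n", is_acceptable(&logo));   /* prints 0 */
    return 0;
}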

"Once a system administrator has gone though that experience, they become much more hesitant to patch," Shostack said. "It's very important when we build an update that it won't break anything."

