How to floss your security system

Diana Kelley of Computer Associates says that managing security patches is like flossing teeth--and a lot harder to pull off than it sounds.

Patch management is a little like flossing your teeth. Everyone knows they're supposed to do it, but most of us still don't. Some pundits say the simple answer for patching lies in proactivity: apply the patch before an incident occurs, and prevent the problem rather than fix it after the fact. That's a simple truth, but in practice, it's a lot harder to pull off than it sounds. It also contradicts the way security is usually addressed.


Unfortunately, despite all the hype around being proactive and prepared, especially after Sept. 11, 2001, the reality remains that most security fixes are applied reactively, after an incident has occurred.

One problem is that being proactive often gets confused with being fully automated. This is risky, because they're two very different concepts.

While there is much to be said for automating portions of the patch process, there are also compelling reasons to keep manual intervention as a component of the workflow. There's no doubt that administrators are drowning in a flood of daily threat warnings and patch updates, and they have valid reasons for not applying every patch immediately.

Too many have been burned by server farms going dark with a collective "blue screen of death" after applying a buggy service pack and are, quite reasonably, skittish about automatically slapping the latest patches on their production servers. Complicating matters are the vendors themselves. Many release vulnerability warnings concurrently with the patches, escalating the urgency of the patch cycle. Yet the patches themselves are often not fully tested and can cause more problems--such as deleting critical third-party agents--than they fix.
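To make that distinction concrete, here is a minimal sketch of what "automated, but not fully automated" can look like in a patch workflow: an automated lab test and a human sign-off both gate the rollout. The Patch record, the lab_tested flag and the approved_by step are hypothetical illustrations, not any vendor's actual patch-management API.

```python
# Minimal sketch of a patch workflow with a manual approval gate -- an
# illustration of "automate, but keep a human in the loop." The Patch record,
# the lab-tested flag and the sign-off step are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patch:
    identifier: str    # e.g. the vendor's bulletin number
    severity: str      # "critical", "important", "moderate", ...
    lab_tested: bool   # did it survive the test servers without breaking anything?

def ready_for_production(patch: Patch, approved_by: Optional[str]) -> bool:
    """A patch rolls out only after automated lab testing AND a human sign-off."""
    if not patch.lab_tested:
        return False     # automation catches the obvious breakage first
    if approved_by is None:
        return False     # an administrator still owns the final decision
    return True

if __name__ == "__main__":
    candidate = Patch("MS02-039", severity="critical", lab_tested=True)
    print(ready_for_production(candidate, approved_by=None))      # False: no sign-off yet
    print(ready_for_production(candidate, approved_by="jsmith"))  # True: tested and approved
```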


The result is that the industry is between a rock and a hard place on the patch issue. Case in point: Six months before SQL Slammer hit companies such as Bank of America and Washington Mutual and brought portions of their automatic teller machine networks to their knees, Microsoft had released a vulnerability warning and a patch. Why hadn't those organizations applied the patch? Were their administrators asleep at the wheel? Far from it. What they need is focused intelligence about which patches to apply--and when.

Disseminating this intelligence quickly, supported by recommended action steps, or by automation where possible, is a must. Slammer had a six-month lead time, but Blaster, which affected Air Canada and the New York Times, among others, had only a 28-day lead time. It is not fear mongering to think ahead to exploits that can wreak havoc on systems and that are discovered only six hours, or even 28 minutes, before the attacks ensue.

Certainly, being proactive is one component of the solution. Yes, we do, as an industry, need to take the time to "floss." What does that mean, though, in practice? First and foremost, it means taking preventive measures that surround and support patch management efforts.

Patches alone are not the ultimate solution. Harden servers where needed; turn off unnecessary services; build access control into the network, the servers and the applications themselves. For patch management, services and tools that fit into the overall system and network management solution--rather than staying siloed in security--work more effectively.
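As one concrete example of the "flossing" that surrounds patching, the following sketch checks a host for services that are often unnecessary and worth turning off or firewalling. The RISKY_SERVICES port list and the localhost target are assumptions made for illustration, not a recommended baseline.

```python
# Minimal sketch of a hardening check that complements patching: look for
# commonly unnecessary services listening on a host so they can be turned off
# or firewalled. The port list and the localhost target are illustrative only.
import socket

RISKY_SERVICES = {21: "ftp", 23: "telnet", 135: "msrpc", 139: "netbios", 1433: "ms-sql"}

def open_risky_ports(host="127.0.0.1", timeout=0.5):
    """Return the risky services that are accepting connections on the host."""
    findings = []
    for port, name in RISKY_SERVICES.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                findings.append(f"{name} ({port})")
    return findings

if __name__ == "__main__":
    exposed = open_risky_ports()
    if exposed:
        print("Review these listening services:", ", ".join(exposed))
    else:
        print("No commonly risky services found listening.")
```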

Part of being proactive is knowing when something doesn't need to get done and when a patch requires immediate attention. Without a view of the overall systems, that distinction can be blurred. For example, a production server that's accessible from the Internet may need to be patched immediately, while an internal server behind an intranet firewall and accessible only to trusted users might be able to sustain a lag in the patch process.
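A rough sketch of that triage logic, using the example above: the Host fields and the day counts below are hypothetical illustrations of how exposure can drive patch urgency, not a published standard.

```python
# Minimal sketch of exposure-based patch triage: an Internet-facing production
# server is patched immediately, while an internal server behind the intranet
# firewall can tolerate a lag. Fields and day counts are hypothetical.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    internet_facing: bool
    trusted_users_only: bool

def patch_window_days(host: Host, severity: str) -> int:
    """How many days this host can reasonably wait before the patch is applied."""
    if host.internet_facing and severity == "critical":
        return 0      # patch immediately
    if host.internet_facing:
        return 3      # exposed, but a lower-severity fix can wait for testing
    if host.trusted_users_only:
        return 14     # internal, firewalled, trusted users: a lag is sustainable
    return 7

if __name__ == "__main__":
    web = Host("public-web-01", internet_facing=True, trusted_users_only=False)
    hr = Host("intranet-hr-02", internet_facing=False, trusted_users_only=True)
    print(patch_window_days(web, "critical"))  # 0  -> drop everything
    print(patch_window_days(hr, "critical"))   # 14 -> schedule within the normal cycle
```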

Finally, as Mel Brooks said, "Hope for the best, expect the worst," and have a recovery plan in place. Sometimes reacting after the fact is essential: none of us are soothsayers, and even the most well-protected and patched systems may ultimately be attacked. So be ready with a plan for when that happens; the ability to recover from a critical failure is part of the overall security posture. The truth is that patching and protecting proactively will reduce vulnerability, but being prepared for the inevitable reactive patching and recovery is essential as well.

Part of the reason the industry is in reactive mode so much of the time is that security is not seen as critical to overall business profitability. This is a dangerous approach and leads to vulnerability. Arguably, no business can run if it's not secure. But until upper management buys into this viewpoint, security will remain a secondary consideration.