The Daze of Risk
Using "days of risk" as a measure of security may make sense from a practical perspective, but it risks rewarding bad conduct by software vendors.
On Wednesday, two researchers reportedly previewed findings at the RSA Conference showing that a Windows-based Web server is more "secure" than a Linux-based Web server.
The researchers who presented at the RSA Security Conference reportedly found that the components of the Windows Server 2003 installation had 30 total days of risk, while a Red Hat-based Web server had 71 days of risk.
While I haven't seen their paper, which is due out in a month, this is not the first time that "days of risk" (the number of days between the public disclosure of a vulnerability and the arrival of a patch) has been equated with security. A Forrester report published last year found that, while various Linux distributions led in individual categories among three practical components of security (one being days of risk), only Microsoft's Windows topped all three lists.
However, days of risk measures a practical factor in security operations, not the intrinsic security of an operating system. The metric is supposed to capture the period of peak danger for companies. In reality, it favors the approach of commercial vendors, and especially Microsoft, to vulnerability disclosure: that flaws should be disclosed only when a patch is available.
The open-source world does not work that way: generally, flaws are publicly disclosed and fixed fast. Yet while every day an open-source project spends fixing a vulnerability adds to its days-of-risk count, Microsoft's ability to keep holes secret means its developers don't operate under the same penalty. The result is that quickly fixing flaws is not necessarily Microsoft's top priority.
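The asymmetry is easy to see with a small calculation. The following sketch uses hypothetical dates and is only an illustration of the metric's definition, not the researchers' methodology: a flaw that takes the same 40 days to fix scores 40 days of risk under immediate public disclosure, but zero under disclose-with-patch.

```python
from datetime import date

def days_of_risk(disclosed: date, patched: date) -> int:
    """Days between public disclosure of a flaw and the arrival of its patch."""
    return (patched - disclosed).days

# Hypothetical flaw: found March 1, patch ships April 10.
found = date(2005, 3, 1)
patch_ships = date(2005, 4, 10)

# Open-source style: the flaw is publicly disclosed as soon as it is found.
open_score = days_of_risk(disclosed=found, patched=patch_ships)

# Vendor style: the flaw is disclosed only on the day the patch ships.
vendor_score = days_of_risk(disclosed=patch_ships, patched=patch_ships)

print(open_score)    # 40
print(vendor_score)  # 0
```

Both projects left users exposed for the same 40 days; only one of them is charged for it by the metric.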
In the end, Microsoft Windows could be more secure than Linux, but the days of risk should not be the litmus test.