Microsoft FUDwatch: Windows vs. Linux security
Microsoft is at it again, this time focusing on Linux's alleged security inferiority to Windows.
It's been at least a week since the last bout of Microsoft FUD hit the wires, so I guess it was time for a new wave. Today's FUD comes from an article Microsoft released on how its security compares with that of Linux. It should come as no surprise that Windows comes off as the Second Coming while Linux is left on the wrong side of Acheron.
It's amusing to watch Microsoft attempt to claim the moral high ground with security. Pat Edmonds, Senior Product Manager for Microsoft, writes that the "many eyes makes all bugs shallow" aspect of open source doesn't work for security, and points to several studies that purportedly confirm that Windows is more secure than Linux:
In reality, the "many eyes" mantra for Linux security has largely been disproved for two primary reasons. First, it assumes that all of the "eyes" are qualified to know what they are looking for. In reality, security expertise is not widely distributed across most users, but is actually a fairly rare and valued skill set. [Mr. Edmonds should know, as this skillset has been sorely lacking at Microsoft for decades.]
Second, the "many eyes" argument implies that all the "eyes" want to voluntarily peruse code for bugs. Actually, debugging and testing code is not necessarily one of the more exciting pastimes for many volunteer developers, who more often than not would rather devote their spare time to creating the next great application. As a result, it is not surprising that Ben Laurie, Director of Security at the Apache Foundation, stated that "although it's still often used as an argument, it seems quite clear to me that the 'many eyes' argument, when applied to security, is not true."...
Microsoft is adept at twisting the truth about how open source works. Every person in that company knows, or should know by now, that significant commercial interests are involved in open-source development, and especially in Linux. So when Mr. Edmonds refers to "volunteer developers," he's surely attacking a straw man (just as Bill Hilf recently did).
Not content with this minor indiscretion, Mr. Edmonds quotes Ben Laurie and tries to use his words against him, to which Mr. Laurie replies:
...[F]ocusing on the "many eyes" fallacy fails to capture an important difference between open and closed source: namely that if I want to do a security review of an open source product, I can. For Microsoft's products I would have to (potentially illegally) reverse engineer them before I could even start.
Secondly, the fact that more bugs are found in an open source product than a closed source one is not, in itself, an indicator that more bugs exist - or even are known. It is equally plausible that the availability of the source encourages a more collaborative approach to security, so that those few who do search for bugs are more inclined to report them than to exploit them. It is also the case that, since open source products cannot conceal their security fixes, they are more inclined to make them public, even if they had no need to....
Thirdly, the study on which they rest their conclusion is comparing apples and oranges. From the report: "For each operating system, Secunia tracks all vulnerabilities that affect a full installation of all components and packages included in the current release."
A full release of Windows is far less functional than a full release of Red Hat. Windows will only include the base operating system, whereas RH will include pretty much every open source project you've ever heard of. So, simply counting vulnerabilities in a full install is highly biased. A fairer comparison would be to look at an install of RH with equivalent functionality. Presumably that doesn't cast Windows in such a favourable light, or they would have done it.
Finally, their study shows that Windows actually had more bugs classified as "highly critical" than RH. 5 for Windows versus 2 for RHES 4 and 1 for RHES 3. I would say this makes the conclusion of even this biased study more than a little suspect.
Boiled down, Microsoft is effectively saying, "Trust us to help you be secure," while open source responds, "Trust us, but also trust yourself." Open source doesn't force its adopters to place security entirely in the hands of a vendor, though there are certainly open-source vendors who are happy to enhance security and stand behind it for a fee.
Microsoft, for its part, clearly views itself as an island: a fortress that can take care of all its customer needs, including interoperability:
Interoperability by design is a key element that is enabled through the Microsoft development model. By taking into account the interoperability needs of Microsoft's broad customer base, which includes the need to exchange data with software and hardware from more than 100,000 other companies, during the design phase Microsoft can implement appropriate standards and leverage relationships with other vendors to ease the burden on customers who need to integrate Microsoft products with software from other vendors including open source.
Microsoft's model is, "Trust us to take care of everything. We're a nearly omnipotent gatekeeper." In some ways, this is true. Microsoft has a lot of engineers and a lot of experience with interoperability.
But consider the open-source alternative: while vendors like Red Hat, Canonical, and Novell take care of the most important interoperability points, the community can fill in gaps on its own, so there is no single point of failure. For example, internationalization of products tends to happen much, much faster in open source than in proprietary products. Why? Because you're not waiting for those 10 smart developers within Microsoft tasked with internationalization to get to your preferred language, whether that's German or Swahili.
Also, as I read through Mr. Edmonds' article, I got the sense that the basic model is always "Trust us to bake security into the product." But this overlooks the biggest problem: no system is perfectly secure from the start, so what happens after the code is released is often as important, if not more so, than what happens before.
With a proprietary product you entrust all security to the vendor. That may work most of the time. But for those times when it doesn't...well, you're worse than on your own. You're on your own without the legal right to help yourself. That doesn't sound like much of a security proposition to me.