Open-source misperceptions live on
An Election Technology Council white paper reminds us that there are still a lot of misunderstandings about open source outside of a tech-savvy audience.
The enterprises, vendors, developers, analysts, and journalists I speak with regularly are mostly pretty savvy about the basics of open source at this point. Even if they're not licensing geeks or otherwise expert in all the minutiae and subtle implications of open-source development, community, and usage, they generally have the important basics down.
However, as I read Rice University computer science professor Dan Wallach's detailed and thorough critique of an Election Technology Council white paper on open source in voting systems, I'm reminded that there are still a lot of misperceptions about open source among the broader public.
A lot of people are quick to chalk up such confusion to sinister plots to deliberately undermine open source. No doubt, some of it is indeed willful misunderstanding, or simply disinformation.
But certainly not all. I've personally heard longtime friends from outside the tech industry voice many of the same beliefs. Most of these fall into one of two buckets: open-source software depends on the support of some nebulous "community," and open-source software is less secure because bad guys can use the code to find vulnerabilities.
With respect to the first point, this discussion in the ETC paper is typical:
Due to the volunteer nature of an open source model, the issue of accountability in this environment provides a stark contrast to the accountability within a traditional proprietary offering. In commercial product offering, the individual company is held accountable for delivering a product that meets all applicable standards and for meeting project milestones. Contract requirements are often used to establish performance milestones and clearly delineate the responsibilities of a provider. Within a corporate structure, liability is clearly delineated to the company. In an open source environment, a volunteer group of collaborators will not be so clearly subject to financial liability or have a clear line of accountability. It is possible that a hybrid approach could be undertaken for an open source project which is launched in partnership with a private company, but the issue of intellectual property investment and concerns over the long-term viability of the company's product will likely trigger a need to adopt a more restrictive licensing approach, one more indicative of a traditional proprietary model.
In part, this is a case of conflating open source of circa 1997 or so with open source of 2009. It also reflects that most of the people who are loudest about open source as a social movement emphasize hobbyist communities rather than corporate sponsorship and in-house professional development. Indeed, these people often decry the latter as a betrayal of free-software principles.
The reality for most commercially important projects is much different. The bulk of the development is directly funded by IT vendors for self-interested reasons. In the case of the Linux kernel, the work is shared by a large number of companies. Other projects, such as JBoss and MySQL, are primarily developed by programmers at a single company.
Support for major open-source software is similarly commercialized. Although "community support" (that is, forums, blogs, Twitter, and the like) can often be a pretty good way to track down fixes, it isn't the only option. A Red Hat Enterprise Linux customer, for example, can get support in exactly the same way that someone with a support contract for Microsoft Windows would.
As for the security question, I look at it largely as a case of analogies gone bad. Typical is this line from a paper on Linux security by a public policy expert that I was asked to review a few years back: "it would seem to stand to reason that if keeping passwords, access numbers, and other aspects of system security secret, keeping code secret might be one way to enhance security."
It all sounds reasonable. After all, even if I think I have a solid security system installed in my office, I don't stick its specifications and layout on a piece of paper and nail it to my front door.
However, as Wallach notes:
What we learned from the California Top-to-Bottom Review and the Ohio EVEREST study was that, indeed, these systems are unquestionably and unconscionably insecure. The authors of those reports (including yours truly) read the source code, which certainly made it easier to identify just how bad these systems were, but it's fallacious to assume that a prospective attacker, lacking the source code and even lacking our reports, is somehow any less able to identify and exploit the flaws. The wide diversity of security flaws exploited on a regular basis in Microsoft Windows completely undercuts the ETC paper's argument. The bad guys who build these attacks have no access to Windows's source code, but they don't need it. With common debugging tools (as well as customized attacking tools), they can tease apart the operation of the compiled, executable binary applications and engineer all sorts of malware.
In short, access to software (whether open source or just disclosed source) is not the unlocked door that many non-developers assume it is.
Cryptography provides another example. The mechanisms used by most protocols to encrypt and decrypt data are well-documented. They don't depend on secrecy to work. They depend on decryption being computationally infeasible without the secret key.
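This principle is easy to see in practice. Here's a minimal sketch using Python's standard library and the fully public HMAC-SHA256 construction (RFC 2104); the key and message values are made up for illustration:

```python
import hashlib
import hmac
import secrets

# HMAC-SHA256 is completely documented and its implementation is open,
# yet the scheme remains secure: only the key needs to stay secret.
key = secrets.token_bytes(32)  # the one secret in the whole system
message = b"an important record"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker can read the algorithm's source code, but without the key
# cannot produce a valid tag for a tampered message.
forged = hmac.new(b"guessed key", b"a tampered record", hashlib.sha256).hexdigest()

# Verification with the real key succeeds; the forgery does not match.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
assert tag != forged
```

Knowing every line of the algorithm buys the attacker nothing; breaking it requires recovering the 256-bit key, which is computationally out of reach.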
And the converse offers further evidence. Weak schemes that rely largely on keeping their algorithms secret are often broken relatively easily; the Content Scramble System (CSS) used on DVDs is a textbook example.
I take all this as a reminder: don't assume things that "everyone knows" really are.