The security problem with shared-source software
Letting someone watch themselves die and ensuring they're impotent to stop it is a bad strategy....
While reading a larger article on open-source adoption in the US Department of Defense, I came across an interesting perspective on why shared-source software (a model Microsoft and a growing number of software vendors use to mimic open source without fully embracing its benefits and obligations) is bad for security:
Several large companies whose software is in heavy use in DOD advocate a shared source code model in which people can view the source code but not change it. This shared source code approach has some problems, though. By sharing source code with organizations, the users have the ability to find flaws in the software. However, because they are not able to fix code security flaws, unscrupulous organizations may use access to source code to develop software that exploits the bugs. This shared source code approach potentially contributes to the rise in zero-day exploits in a number of commercial products. The best approach for truly secure systems is transparency--release the software as open source because security by obscurity rarely works well.
In other words, letting people see a problem (a security exploit or whatever) without giving them any way to fix it is a recipe for frustration and, potentially, disaster. It's like tying a customer's hands so that they can see the blow coming but can't raise their arms to block it.
Shared source may be comfortable for vendors, but it's bad for customers: it exposes flaws to attackers while denying defenders the ability to patch them.