Wrapping up Speeds and Feeds, part 4: Security

PCs and the Internet provide only fragmented support for secure storage and communications. Technology and standards are available to provide security by default. What are we waiting for?

Peter Glaskowsky
Peter N. Glaskowsky is a computer architect in Silicon Valley and a technology analyst for the Envisioneering Group. He has designed chip- and board-level products in the defense and computer industries, managed design teams, and served as editor in chief of the industry newsletter "Microprocessor Report." He is a member of the CNET Blog Network and is not an employee of CNET.

Nothing disappoints me more about the evolution of the personal computer than the PC's lack of ubiquitous security.

There's no technical reason why PCs can't provide strong security. Improving security costs money, which provides a business reason not to do it, but the way I see it, the costs associated with insecure computing have long since eclipsed the costs of making systems more secure.

It's also true that there's always a way around any layer of protection, which is sometimes taken as another argument against improving security. As the argument goes, you have to be able to access your own data; if someone else wants access, they can always force you to get it for them.

But that's like saying that because anyone can force you to unlock your front door, you shouldn't have a lock on it.

The right answer, I think, is to seek the point at which the security of a system establishes a balance between the costs and inconveniences of providing the security and the risks of having the security violated. In my opinion, the PC is nowhere near that point.

We need several key security improvements in the personal-computing experience:

Secure storage
To my way of thinking, security starts with secure storage. I assume most of us have sensitive information on our PCs. Since PCs can be stolen or attacked while nobody's watching, we need a way to protect our information. "Storage" in this context can include hard drives, the PC's main memory, and even removable media like USB drives and DVD-ROMs.

Properly done, storage security can be almost invisible. It shouldn't take much more than entering a password to unlock the storage device; for extra security, you could be required to use some kind of security token. But once you're in, and as long as you remain physically present, your machine can operate normally.
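Done properly, "unlocking" a drive is mostly key derivation: the password is stretched into an encryption key, and a wrong password simply produces a key that can't decrypt anything. Here's a minimal sketch of that step in Python; the salt size, iteration count, and function name are illustrative choices, not any particular product's actual scheme:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a password into a 32-byte key with PBKDF2-HMAC-SHA256."""
    # The salt is stored in the clear in the volume header; it defeats
    # precomputed password tables. The iteration count slows brute force.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# The same password and salt reproduce the same key, so the volume can be
# unlocked on every boot without the key itself ever being stored.
assert derive_key("correct horse battery staple", salt) == key

# A wrong password yields an unrelated key; decryption simply fails.
assert derive_key("wrong password", salt) != key
```

Real full-disk encryption wraps a separate bulk-encryption key with the derived key, so the password can be changed without re-encrypting the disk, but the user experience is exactly what's described above: type a password, get your data.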

The same weaknesses that contribute to unreliability (see my earlier post, "Wrapping up Speeds and Feeds, part 2: Reliability") make PC storage insecure. Recent history shows how vulnerable PCs are to malware. Once a malicious program is in your machine, it can find personal data in memory or on disk and send it over the Internet to the attacker. The same mechanisms that make execution reliable also tend to make it secure, and that's a good thing too.

Hardware can create security holes, too. The IEEE 1394 peripheral interface (also known as FireWire and i.Link) is a notorious weakness. It can provide unlimited access to system memory and, indirectly, all connected storage devices, even those configured with full-disk encryption.

Strong process and object isolation--the same techniques I recommended to improve reliability--can help improve storage security, too. These methods apply directly to memory security, and by extension, to mass storage.

Secure communication
Because most of the data on our PCs arrives there from somewhere else, communications security is also important. I remember being disappointed in the late 1980s that emerging Internet e-mail standards did not allow for secure e-mail, but I assumed that this omission would be quickly rectified.

When Phil Zimmermann's Pretty Good Privacy arrived a few years later, I figured it was only a matter of time before all Internet e-mail was encrypted by default.

But some of the critical technology, notably the RSA public-key cryptography algorithm, was patented and not really available at consumer-friendly price points. When the RSA patent was released to the public domain in 2000, I figured the end of insecure e-mail was finally in sight.
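For what it's worth, the core of RSA is nothing exotic, just modular exponentiation. Here's a textbook toy in Python, with primes far too small for real use (actual keys use primes of a thousand or more bits):

```python
# Toy RSA -- illustration only, never usable for real security.
p, q = 61, 53
n = p * q                # modulus, shared by both keys
phi = (p - 1) * (q - 1)  # used only during key generation
e = 17                   # public exponent, chosen coprime with phi
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi); Python 3.8+

m = 42                   # a message encoded as an integer smaller than n
c = pow(m, e, n)         # encrypt with the public key (e, n)
assert pow(c, d, n) == m # decrypt with the private key (d, n)
```

Anyone can encrypt with the public pair (e, n); only the holder of d can decrypt. The hard part has never been the math, but deployment: key distribution, software support, and (until 2000) licensing.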

But here we are, eight years later, still waiting.

It wouldn't take much for someone to introduce a mainstream e-mail service that is secure by default. Apple, for example, could provide almost invisible security for MobileMe e-mail using nothing more than the existing open standards created for that purpose. Any e-mail provider could do the same thing. What are they waiting for?

In fact, there's no longer any technical or commercial barrier to cryptographic protection of all of our Internet communications. Every Web server could offer HTTPS in preference to standard HTTP, but very few do. Almost every insecure Internet protocol has a secure alternative, but most of those alternatives are not well supported.
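On the server side, preferring HTTPS can be as simple as a blanket redirect. A hypothetical nginx-style configuration (the domain and certificate paths are placeholders):

```nginx
# Serve nothing over plain HTTP except a redirect to the HTTPS equivalent.
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}

# The real site, served only over TLS.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.com.key;
    # ...the rest of the site configuration...
}
```

A few lines of configuration and a certificate are all it takes, which is exactly why the slow uptake is so frustrating.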

This lack of security is quite serious and quite expensive. Credit card theft rings have intercepted card numbers transmitted over Wi-Fi networks, and many individuals have fallen victim to identity theft because someone intercepted their traffic on a public hotspot.

There are ways for individuals to protect their Internet communications. One is to use VPN (virtual private network) software, which is built into most PC operating systems these days. Until consumer ISPs provide VPN endpoints for their customers to use when away from home, however, this option is mostly limited to business users. Also, a VPN only protects traffic between you and the other end of the VPN connection; from there to whatever Web sites or other services you access, your connections are not covered by the VPN.

Secure identification
Many sites on the Internet require some form of log-in before giving access to personal information. This process is separate from the communication method itself.

HTTPS, for example, doesn't require any kind of user identification; it just protects a single session. VPNs protect the link from the user's machine to some remote site, but in themselves don't usually give access to systems at that site.

Ideally, the remote system should be convinced of who the user is, the user should be convinced of which system is being accessed, and the whole process should be strongly secured by open industry standards.

Alas, that isn't how it works.

Most Web sites use their own authentication systems, requiring users to keep track of a separate set of log-in credentials for every secure site they visit. There are a few open standards for this purpose, such as OpenID, but they are nowhere near universal.

Few Web sites provide any way for the user to authenticate the site itself. The Extended Validation Certificates offered by some certificate authorities help a lot, though they are relatively expensive and not easy to get. Modern Web browsers recognize these certificates and turn the address bar green to indicate that the site certificate matches the displayed address.

These certificates still don't provide a direct negotiation between the user and the server based on some previous agreement, however, so there are still some risks involved, such as users mistyping domain names and getting a site masquerading as the one they intended to reach, or having the server taken over by malware.

While it's entirely appropriate for many servers to know exactly who their users are, I also think there are times when users should be entitled to some privacy. Just as there are multiple levels of identification, there should be multiple levels of anonymity.

The details of this option can get a little tricky. I think it ought to be possible to have a Web site for government oversight, for example, where whistleblowers can participate with almost complete anonymity. Of course, such a site could become a magnet for libel, and that wouldn't be useful.

A more practical kind of anonymity is already practiced by many Web sites, where user credentials are accepted uncritically but access logs can still be used to track down the IP addresses of users who violate the site's terms of service (or the law). This is fine, as far as it goes, but it isn't really secure anonymity. It can be fairly easy to associate an IP address with a name depending on the user's other online habits.

There are anonymizing services available online that can act as go-betweens to protect against this kind of investigation, but these services can also provide cover for libel, and again, that isn't very useful.

I think there's room for a new open standard to anonymize Internet communications in a way that is secure against casual investigations yet fully accountable if abused.

Security is a big topic, of course, and I've really just scratched the surface here. (Not to mention the risk of oversimplifying some important issues.) Suffice it to say that there's plenty of room to make personal computing far more secure, and that this improvement is, in my opinion, long overdue.