I was casually cruising the news sites yesterday when I came across a story about porting "ineligible" numbers to AT&T and iPhone. I clicked on the story because I know some of the people who lobbied for and won the rights to treat phone numbers more like personal property you own than corporate property you rent. I was right with the author until he said (without comment or outrage):
On that screen, enter your name, Social Security number, and your current billing information and home telephone.
Say WHAT!?
Apple and AT&T are demanding customers reveal SSNs to activate their iPhones. That should be the lead of every technology and business article written this week. If you don't believe me, read on.

This weekend, 525,000 people, my wife Amy among them, purchased the Apple iPhone. Those who purchased the phone via the Web, like Amy, were given a place in a virtual waiting line two to four weeks long, with a lengthy "homework assignment" to pass the time. Those who braved the crowds to purchase the phone at retail stores were rewarded with a form of instant gratification--the opportunity to activate their iPhone using an online mechanism that requires subscribers to enter a Social Security number (SSN). While Amy was at first disappointed to wait, I've convinced her that she got the far better deal: during this two- to four-week "cooling-off" period, she at least has some time to consider how best to protect herself from a consumer-protection disaster in the making.
It is well known that Apple is a very secretive company. That does not necessarily mean it handles personal data more responsibly than a very transparent company; it just means that it's very difficult for an average person like me to discover what it is doing and what it is hiding. But AT&T? The company is a defendant in a class-action lawsuit after a federal judge denied its motions to have the case dismissed. The suit alleges that AT&T gave the NSA "unchecked backdoor access to its communications network and its record databases," violating the law and the privacy of its customers. Whatever the court may ultimately find, the case clearly demonstrates why it is profoundly bad judgment to give a telephone company (or most any other company) sensitive personal identifying information such as one's SSN. Period.
Before writing me off as a privacy kook, consider this testimony from 1992 by the group Computer Professionals for Social Responsibility (CPSR) before the Special Joint Subcommittee Studying State and Commercial Use of Social Security Numbers for Transactional Identification. According to testimony, "[until] 1972, each card issued was emblazoned with the phrase 'Not to be used for ID purposes.'" It cited a report by the U.S. Department of Health, Education, and Welfare that recommended, in unqualified terms, that the SSN not be used as an identifier (bold text in the original document):
We recommend against the adoption of any nationwide, standard, personal identification format, with or without the SSN, that would enhance the likelihood of arbitrary or uncontrolled linkage of records about people, particularly between government or government-supported automated personal data systems.
This advice was not followed, and by 1992 the CPSR reported the dismal facts: "Unfortunately, [the Federal Privacy Act of 1974] has not been effective due to bureaucratic resistance from inside the government, lack of an effective oversight mechanism, and the uncontrolled use of the SSN in the private sector." When states such as California, New York, and Virginia passed legislation in the mid-1990s requiring an applicant's SSN before issuing a driver's license, they flattened 60 years of privacy protection and exposed every citizen to a degree of identity risk that was, and remains, unconscionable.
And so what has been the legacy of the government ignoring its own advice and the advice of leading computer experts? Precisely what the CPSR predicted: identity theft is now the most prevalent complaint received by the FTC, and it's America's fastest-growing crime. Unlike a video game that just eats your quarter and says "GAME OVER," a stolen identity can ruin your credit score, drain your bank account, endow you with a lengthy criminal record, or grant you an entry on the no-fly list. More troubling, identity theft can be a one-way ticket to a world in which the bits on some agent's computer screen matter more than your own testimony, a world in which the term habeas corpus is a lexical artifact rather than a constitutional guarantee, a world in which your physical self can be subordinated to what is believed about your virtual self.
On December 18, 2006, Tom Zeller reported "An Ominous Milestone: 100 Million Data Leaks" in the Technology section of The New York Times. The number of confirmed victims is at least 15 million. The cost is estimated at more than $50 billion a year. In epidemiological terms, we have more than 100 million "exposed," 15 million "affected," and a cost of, well, more than $50 billion. How did we get here? And what are we going to do about this virtual epidemic?
Identity theft is not a new crime, but the combination of corporate opportunism and governmental policies designed to amplify rather than mitigate the risks has conspired to create a near-perfect storm. In simple terms, the more of our lives we commit to technology, the larger and more vulnerable a target we make ourselves for technical exploitation, including identity theft. Don't get me wrong: some computer-based technologies allow for far better security than any other methods I know, but security is only as strong as its weakest link, and the more links you involve, especially the more parties, the weaker things get. Conversely, the fewer keys you use, the more dependent you become on the strength of each key. Some keys (like the launch codes for our strategic nuclear missiles) are very well protected indeed. But if a key is weak, or not particularly well protected, you don't want to stake much on it.
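To put a rough number on the weakest-link intuition, here is a back-of-the-envelope sketch in Python. The 2% per-party breach probability is purely an illustrative assumption (real-world risks are neither independent nor uniform), but the shape of the result holds:

    # Back-of-the-envelope model of "more parties, weaker system."
    # ASSUMPTION: each holder of the key has an independent breach
    # probability p -- an illustrative figure, not measured data.

    def exposure_probability(p: float, n: int) -> float:
        """Chance that at least one of n independent holders leaks the key."""
        return 1 - (1 - p) ** n

    for n in (1, 5, 10, 25):
        print(f"{n:2d} parties -> {exposure_probability(0.02, n):.1%} chance of exposure")

    #  1 parties ->  2.0% chance of exposure
    #  5 parties ->  9.6% chance of exposure
    # 10 parties -> 18.3% chance of exposure
    # 25 parties -> 39.7% chance of exposure

The particular percentages matter less than the trend: every additional company that demands the same key moves you measurably closer to certain exposure.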
The security records of many companies are dismal. We don't actually know how bad they are, because most companies don't even report breaches to themselves, let alone to the government or their customers. Don't ask, don't tell. But we get a glimpse every now and again, and frankly the best way to protect oneself is to supply the least personal information a transaction requires, favoring companies that request less over those that demand more. (Another approach to minimizing the problem is simply to deny its severity. For example, when the news broke that 26.5 million personal records held by the U.S. Department of Veterans Affairs had gone missing, Avivah Litan, a security analyst for the Gartner Group, argued that the problem was not very serious because "Frankly, veterans don't have a lot of money." Frankly, I don't find that line of reasoning particularly compelling.)
And it gets worse. Individuals who can be victimized by their own data can also become collective victims of those with whom they are associated. As Bruce Schneier wrote for Wired magazine:
Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.
The Census Bureau normally is prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.
New research proves they were lying.
The whole incident serves as a poignant illustration of one of the thorniest problems of the information age: data collected for one purpose and then used for another, or "data reuse."
It is bad enough that the government might collect data for one (lawful) purpose and then use it for another (nefarious) purpose, but what happens when all data is keyed by a single identifier, such as the SSN, which was never designed for personal identification in the first place? And what happens when that number is leaked (100 million instances and counting) or stolen (15 million instances and counting)? The opportunities for abuse, both within and outside the system, become virtually limitless. (And legislation passed in 2005 has only served to accelerate both the breadth and depth of those opportunities.)
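The "uncontrolled linkage" the HEW report warned against is easy to demonstrate. The sketch below uses invented toy records; the databases and field names are hypothetical, and the SSN is the famously voided specimen number once printed on sample wallet-insert cards. Once two unrelated record systems share a universal key, joining them takes one line of code:

    # Toy illustration of "arbitrary or uncontrolled linkage of records."
    # The records and field names are invented; the SSN is the voided
    # specimen number once printed on sample wallet-insert cards.

    medical = {"078-05-1120": {"diagnosis": "hypertension"}}
    payroll = {"078-05-1120": {"employer": "Acme Corp", "salary": 52_000}}

    # Anyone holding both data sets can join them on the shared key --
    # no consent, no oversight, no special access required.
    linked = {ssn: {**record, **payroll.get(ssn, {})}
              for ssn, record in medical.items()}
    print(linked)
    # {'078-05-1120': {'diagnosis': 'hypertension',
    #                  'employer': 'Acme Corp', 'salary': 52000}}

A random, per-database key would make that join impossible without the cooperation of both record-keepers; a universal key makes it free.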
All of which is why the iPhone activation mechanism is so troubling: it compels people, in the heat of the moment, to do something that a moment's thought would tell them never to do. Now, I'm sure it's possible to get a phone activated without giving up one's SSN. I did it with my carrier several years ago by walking the issue up to a VP's desk and posting a $1,000 bond for two years. So it can be done. But should it be so hard? And how are we going to teach our children the importance of protecting personal information when the laws of the state and mainstream corporate behavior make it virtually impossible to do so?
The only solution I can see is that our family will have to dramatically expand the lesson of "you are responsible for you" beyond the basics of verbal and physical conduct. If you have any good references on how to teach your third-grader the ins and outs of identity management and information security, I'd be happy to receive them now. In the meantime, we'll let you know whether we find a way to activate Amy's new iPhone without handing over sensitive personal information to a company that has demonstrated no respect for personal privacy or identifying data.