Taking the classical approach to security
RSA chief scientist Ari Juels explains why classical literature has a place in IT security, and what to make of security in radio frequency identification.
Ari Juels' fascination with numbers is the stuff of fiction, literally.
The chief scientist and director of RSA Laboratories recently completed a novel in which the protagonist is hired by the U.S. government to counter the efforts of Pythagoreans, a Greek group that believed in the supremacy of numbers--subscribing to the notion that by mastering numbers, one could understand and control the forces of the universe.
That concept, he told ZDNet Asia during a recent visit to Singapore, had been "a little silly" until cryptography developed to a stage where "mastery of certain mathematical problems could in principle lead to considerable power over computing resources and consequently over our lives."
The book, which will be launched at the RSA Conference 2009 in San Francisco in April, was in essence, the coming together of two of Juels' interests--computer security and classical literature. He graduated from Amherst College in 1991 with degrees in Latin Literature and Mathematics.
Thirty-eight-year-old Juels, who joined RSA in 1996, shed some light on recent RFID (radio frequency identification) issues in e-passports, identity documents, and transport-related systems, as well as how to balance security and privacy.
Q: What are you currently working on?
Juels: With the acquisition of RSA by EMC, we've turned our attention to some of the special security problems that storage systems present. In particular, we've looked at...the ability of a client to verify that a file that is stored on remote servers is still there--intact. We've been able to develop a protocol which accomplishes the seemingly paradoxical property of enabling a client to verify that a file is completely intact--that every bit is there, not a single bit has been changed--without downloading the file. In fact, the archiving service can send a very short proof--some tens of bytes--and that's enough for the client to establish that the file is completely retrievable. That's been a major area of research for us.
Is there a name for this concept?
Juels: There've been several names. I guess the most recent is an acronym called HAIL, for High Availability and Integrity Layer.
Does HAIL appeal to a specific industry or user?
Juels: Our feeling is that it will support storage services in the cloud. Online storage is becoming more prevalent and consequently, people have less control over, and knowledge of, where their data is stored, so it makes sense for them to want some technical assurance that their data is still there. If you store a file on your hard drive and maintain the hard drive yourself, you've got at least some physical assurance of the integrity of the data you are storing. If you store it in a cloud, you have no idea whether it's in California or Greenland and what organization...is responsible for protecting and administering the systems that store the data.
Wouldn't the online storage providers already boast that kind of capability?
Juels: They do presumably provide some sort of contractual assurance. But ultimately a client or customer is just relying on the reputation and contractual obligations of the organization that is storing the files. Different organizations have different ways of maintaining file integrity--they have varying backup policies and their systems are scripted in different ways...so ultimately the consumer probably knows very little about the physical media, and the administration of those media, on which the files are stored.
Would the technology be specific to EMC, or would it also apply to other online storage providers out there?
Juels: That we haven't figured out--(the technology) is still in the lab. We envision an economy in which storage becomes a tradable resource--a fungible resource--like electricity or water, in which case this tool can be used to test the quality and basic assurances of the resource.
EMC may one day be in the business of administering cloud storage--in fact, it has just launched a cloud storage product--and it could use this tool to provide internal integrity assurances, so that it is able to make stronger assertions to its customers, or it could enable its customers to check directly that their stored files are intact. Beyond the fact that we haven't worked out the appropriate business models, there are several places where this can come into play.
What is it that keeps you going when you don't feel like it?
Juels: (Security) continues to be intellectually engaging to me. Security and cryptography ramify into other disciplines...I recently worked on a paper studying the security of a new identity document initiative in the United States called the passport card, which is also influencing the design of driver's licenses. With colleagues at the University of Washington, we looked at the security of the RFID chip in these identity documents and found that the chips could be cloned. We found that to understand the security of the border crossing system as a whole, we really needed to understand the psychology of security, not just the technical facets of the card design. So we're looking at the psychology literature to understand phenomena like vigilance decrement, which basically describes the common human behavior of relaxing vigilance when a threat does not materialize over a long period of time.
Cryptography is always interesting. Even if it became uninteresting, there are so many intersections between security and other disciplines that my curiosity is perpetually renewed.
What were the conclusions of the paper about those identity documents for land and sea crossings?
Juels: We felt that in a system in which human beings demonstrated ideal behavior, and in which you have perfectly vigilant border crossing agents, the fact that a card could be cloned might not be so serious. But because of the limitations in human behavior and vigilance that we anticipated, our feeling is that the card was probably not well-designed.
When you combine the technology and anticipated behavior of users of the system, then flaws become apparent. If you just take the technology in isolation, you might conclude that the system was just thoroughly broken because the cards can be cloned, but if you look at the system from a broader perspective you realize the security of the system really depends on human behavior and because of some common psychological phenomena, there are real causes for concern for the system.
You once called RFID tags the ants of the computing world. Why?
Juels: They are numerous and individually not very powerful. Ants actually constitute the largest biomass in the world, and similarly RFID tags will become the most pervasive form of computing--at least in terms of sheer numbers, not in terms of computing power--they will be everywhere. Ants don't look like much, but collectively they weigh more than any other organism in the world.
With the rise of e-passports, machine-to-machine communication and the introduction of mobile payment, what implications will that have on RFID security?
Juels: RFID denotes a spectrum of devices...it's important to choose the right devices and protocols. RFID tags pose a special challenge in that they are very resource-constrained--manufacturers are always attempting to cut corners. It makes sense from a business perspective not to endow them with computing capabilities that are unnecessary, but at the same time, one has to be cognizant of the security requirements that they need to meet.
What's the least secure thing you've ever done or come across?
Juels: We've examined vulnerabilities in a few different systems; I'm not sure I can say that one was definitively less secure than the others. But I think that the industry practice that has led to the most serious security lapses has been the practice of security through obscurity...a term that cryptographers use. The basic principles of contemporary cryptography were enunciated in the 19th century by (Auguste) Kerckhoffs, a French military cryptographer. The most important of these was that the secrecy of the design of a cryptographic algorithm should not be relied upon. What should be relied upon in a good, well-designed system is the secrecy of the key.
For instance, the (design of the) RSA algorithm, the Advanced Encryption Standard or other basic cryptographic algorithms that are used in the industry [are known to] everyone--there's no secrecy about that. It's only the individual keys--the keys of particular users--that are kept secret. And that's an important design principle.
We illustrated this, for instance, in our analysis of an RFID device manufactured by Texas Instruments in the U.S.--a device used in a payment system called Speedpass, and also used to secure tens of millions of automobiles. Texas Instruments used too short a key--a 40-bit key--one that could be cracked by brute force in a few hours of computing time, or in fact, with a well-designed system and some pre-computation, in a couple of minutes. They tried to conceal this fact by not releasing the design of their algorithm. They violated Kerckhoffs' principle by keeping the cipher secret. The reason Kerckhoffs enunciated this principle is that in a system that's widely fielded, it's relatively easy to unravel the design of the cipher, and that's exactly what we did. (With) colleagues from Johns Hopkins University, we reverse-engineered the cipher, and because the keys were too short, we were able to clone this device.
And there have been other systems, most notably of late, the Mifare system. The problem was exactly the same--they had a short key and they tried to conceal the cipher design.
When we attacked the Texas Instruments device, we didn't actually examine the hardware; we were able to do what is called a black box analysis, with some clues from schematics they had published. In the Mifare case, a couple of graduate students actually shaved down the chip, looked at the IC (integrated circuit) and managed to figure out how the cipher was designed.
I would say that the most serious vulnerabilities have stemmed from a violation of (Kerckhoffs') principle. It's been violated because RFID tags have to be inexpensive, and corners have been cut. Sometimes they get cut in places that create serious vulnerabilities.
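The brute-force arithmetic behind a 40-bit key falling "in a few hours" can be sketched with a back-of-envelope model. The trial rate below is a purely hypothetical assumption (the interview gives no figure), and the model ignores the pre-computation tricks that shrink the real attack to minutes; the 52-bit and 128-bit figures correspond to the passport key lengths Juels mentions later.

```python
# Hypothetical attacker throughput in key trials per second -- an
# assumption for illustration only, not a figure from the interview.
TRIALS_PER_SEC = 10_000_000

def exhaust_hours(key_bits: int, rate: float = TRIALS_PER_SEC) -> float:
    """Worst-case hours to try every key in a key_bits-bit keyspace."""
    return 2 ** key_bits / rate / 3600.0

for bits in (40, 52, 128):
    print(f"{bits}-bit key: about {exhaust_hours(bits):.2e} hours to exhaust")
```

At this (assumed) rate, a 40-bit keyspace falls in roughly a day; every added bit doubles the work, which is why 128-bit keys are out of reach of exhaustive search.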
What about that Dutch researcher who hacked the e-passport?
Juels: That was a subtle problem. The keys in e-passports depend on the way that personal information in the passport is formatted. I don't know the details surrounding the Dutch passport, but the way that the key was created from personal information--the format of the biographical information on the passport--led to a very short key, something like 35 bits, so it could be easily broken. Different countries have different effective key lengths.
This also reflects a subtle design flaw in the e-passport: biographical information alone should not have been used to create the key--it should have had an additional component.
Even in the U.S., the key is a little on the short side.
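Why a key derived only from biographical data ends up short can be estimated by summing the entropy of each field an attacker must guess. The field names and ranges below are illustrative assumptions of mine, not the actual passport specification; in practice fields are often even more predictable (e.g. sequential document numbers), which shrinks the effective key further.

```python
import math

# Hypothetical search ranges for the fields a passport-style key might
# be derived from -- illustrative assumptions, not the real format.
field_ranges = {
    "document_number": 10 ** 7,   # assume ~7 effectively random digits
    "date_of_birth": 50 * 365,    # ~50 plausible birth years
    "expiry_date": 10 * 365,      # 10-year validity window
}

# The effective key length is the sum of each field's entropy in bits,
# far below the 128 bits one would want for a cryptographic key.
effective_bits = sum(math.log2(n) for n in field_ranges.values())
print(f"effective key length: about {effective_bits:.0f} bits")
```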
The ideal length is...
Juels: Ideally we'd like to see 128-bit keys. The U.S. passport has an effective key length of about 52 bits, so it could actually be cracked by brute force, (but) it would require enough of an effort that it's probably not worth most attackers' while to crack the key. But it should be longer.
How should those tackling security challenges or issues balance them against user privacy?
Juels: That's a very tricky question, and it's particularly tricky because of a shift in societal norms--the younger the computer user, probably the less intense his or her privacy concerns. With the rise of Facebook and other social-networking tools, people are much more accustomed to publishing personal information than they were previously. They are very differently sensitized to privacy issues. So again, this is where security meets psychology in interesting ways.
Now, the thing I do think is important is to build in security and privacy from the start, because it's hard to bolt them on afterwards.
What are the biggest misconceptions about RFID, or the security implications of RFID?
Juels: RFID tags, to the average consumer, have some counterintuitive properties. A study conducted by researchers at (the University of California at) Berkeley, for instance, found that many people expected (to hear a beep) when RFID tags are scanned. That has very serious implications for privacy, because it means that people don't understand that the tags can be scanned clandestinely. Other people think that RFID tags can be read by satellite--they can't be, directly, although readers can be linked to satellites, so it is possible to aggregate RFID data globally.
There's a strong disconnect between the way RFID tags work, and the way most people think they work. That can be a problem--it can mean that people ask for privacy provisions that are too stringent or are not stringent enough.
The usual remedy for this problem in the affected industry is to call for consumer education. I'm very much opposed to consumer education because I think, (firstly), there are too many calls for consumer education and we have a finite capacity for absorbing information, and secondly, the topics that consumers have to be educated (about) are often very fleeting.
So what needs to be done then?
Juels: Well, I think that the deployers and designers of RFID systems have to think through security and privacy measures that are either invisible to, or very easy for consumers to grasp, and they have to do this from the start. I don't think that privacy and security have been thought through as thoroughly as they should be in the design of EPC (Electronic Product Code) tags for instance.
Vivian Yeo of ZDNet Asia reported from Singapore.