
'Social X-ray specs' help us read emotions

New tech could help people translate body language by dissecting thousands of subtle indicators--including arched brows and parted lips--better than most humans can.

Elizabeth Armstrong Moore
Elizabeth Armstrong Moore is based in Portland, Oregon, and has written for Wired, The Christian Science Monitor, and public radio. Her semi-obscure hobbies include climbing, billiards, board games that take up a lot of space, and piano.
Do you really want to know what she's thinking? Tim Simpson/Flickr

Dr. Cal Lightman is about to be out of a job. The micro-expression expert central to the TV show Lie to Me could soon be joined by legions of fellow human lie detectors--but instead of squinting intently Lightman-style, they'll be wearing high-tech specs.

So hopes electrical engineer Rosalind Picard at the Massachusetts Institute of Technology's Media Lab, who recently shared a pair with journalist Sally Adee for the magazine New Scientist.

In her account, Adee describes wearing the glasses as being akin to "an extra sense": a blinking red light alerted her, mid-interview, to Picard's confusion and utter boredom.

The prospect of possessing such a sense raises a few questions, not least whether one really wants it, given the harsh truths it may reveal.

Perhaps equally unsettling for some is that a program can pick up on emotions better than the average person can, let alone people with autism, the audience for whom these glasses were first devised. In one study, the average person correctly interpreted only 54 percent of expressions on the faces of real people who were not acting.

"People are just not that good at it," Picard told Adee.

The software is better, but by no means perfect: it correctly identified just 64 percent of the same expressions.

The prototype, which Picard says quickly proved popular among people with autism, incorporates a tiny camera that tracks 24 "feature points" on a face to detect micro-expressions, noting not only their type but also their frequency and duration. The resulting data is then compared against a database to identify six general facial states: thinking, agreeing, concentrating, interested, confused, and disagreeing.
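
For readers curious what that kind of pipeline might look like, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption: the micro-expression labels, the prototype vectors, and the nearest-neighbour matching are stand-ins for demonstration, not Picard's or Affectiva's actual algorithm. It simply flattens each detected expression's frequency and duration into a feature vector and compares it against a small "database" of the six states.

```python
# Illustrative sketch only: a toy expression-to-state classifier.
# All labels, prototypes, and numbers below are hypothetical.
from dataclasses import dataclass
import numpy as np

STATES = ["thinking", "agreeing", "concentrating",
          "interested", "confused", "disagreeing"]

@dataclass
class ExpressionEvent:
    kind: str        # hypothetical micro-expression label, e.g. "brow_raise"
    frequency: float # occurrences per minute in the current window
    duration: float  # average length of the expression, in seconds

def featurize(events: list[ExpressionEvent], kinds: list[str]) -> np.ndarray:
    """Flatten per-expression frequency and duration into one feature vector."""
    vec = np.zeros(2 * len(kinds))
    for e in events:
        i = kinds.index(e.kind)
        vec[2 * i] = e.frequency
        vec[2 * i + 1] = e.duration
    return vec

def classify(vec: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Nearest-neighbour match against stored prototype vectors for the six states."""
    return min(database, key=lambda state: np.linalg.norm(vec - database[state]))

# Usage with made-up numbers: random prototypes stand in for a trained database.
kinds = ["brow_raise", "lip_part", "head_nod"]
database = {state: np.random.rand(2 * len(kinds)) for state in STATES}
events = [ExpressionEvent("brow_raise", 4.0, 0.3),
          ExpressionEvent("lip_part", 1.5, 0.6)]
print(classify(featurize(events, kinds), database))
```

A real system would learn those prototypes from labeled video rather than guessing them, but the basic shape, tracked points in, expression statistics out, then a lookup against known states, is what the description above suggests.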

Picard and colleague Rana el Kaliouby have founded a company called Affectiva to sell their expression recognition software and continue to fine-tune the algorithm. The company is also in talks with a Japanese firm that wants to use the software to distinguish between 10 different types of smiles on Japanese faces, including bakushu (happy smile), shisho (inappropriate giggle), and terawari (acutely embarrassed smile).

In the event that expression-reading glasses become mainstream, I offer one prediction: those of us who are shy about being read like open books will rely increasingly on that oh-so-simple art of texting--replete, of course, with emoticons.