EmotionML: Will computers tap into your feelings?
"Affective computing" standards and technology could help computers grasp human emotion. An intuitive user interface sounds nice, but what about information overshare?
For all those who believe the computing industry is populated by people who are out of touch with the world of emotion, it's time to think again.
The World Wide Web Consortium (W3C), which standardizes many Web technologies, is working on formalizing emotional states in a way that computers can handle. The name of the specification, which in July reached second-draft status, is Emotion Markup Language.
That might sound alien to the cold, calculating ways of a computer. Let's face it, compared with most computer interaction, HAL 9000 sounded positively genial in "2001: A Space Odyssey" when he said, "I'm sorry, Dave, I'm afraid I can't do that."
But the Multimodal Interaction Working Group that's overseeing creation of the technology really does want to marry the two worlds. Some of the work is designed to provide a more sophisticated alternative to smiley faces and other emoticons for people communicating with other people. It's also geared to improve communications between people and computers.
"Today's computers force humans to adapt to them, which causes more and more difficulties to most people," said Marc Schroeder, editor of the EmotionML standard under development. "We have adapted to the 'rationality only' interaction mode that these electronic devices impose on us. Future computers should be able to interact with humans in ways that humans find natural."
The idea is called affective computing in academic circles, and if it catches on, computer interactions could be very different. Avatar faces could show their human master's expression during computer chats. Games could adjust play intensity according to the player's reactions. Customer service representatives could be alerted when customers are really angry. Computers could respond to your expressions as people do. Computer help technology like Microsoft's Clippy or a robot waiter could discern when to make themselves scarce.
"Rather than having to click the 'no' button on some touch screen, I would rather shake my head," Schroeder said. "Without having to consciously decide to do so, I will show a puzzled and confused facial expression, and a human would know that I need advice and guidance." Computers could adapt to this human communication style, he said.
Indeed, some are betting they will. Rosalind Picard, an affective computing expert from the Massachusetts Institute of Technology, is now co-founder and chief scientist of a start-up called Affectiva that sells a USB product called the Q Sensor for emotion detection in circumstances such as gauging the effectiveness of advertisements.
For the technology to catch on, though, a standard way to describe emotions is only one hurdle. Technical developments also are essential. And even then, people will have to adapt and decide whether they really want to infuse their communications with emotion and when they want to expose that information.
Marrying logic and emotion
The effort is something of a contradiction. EmotionML embodies two very different forms of expression--the squishy nature of emotion and the rigorously precise language of a standard.
Yet the idea is not as wacky as it sounds. Does computer-based communication have to be so fraught with mixed signals?
For example, one person's joke or gentle jab might seem to the recipient a serious criticism. A hastily typed text message can come across as brusque. Sarcasm sometimes falls flat. Even emoticons, if the recipient successfully decodes them at all, can be misunderstood.
Some little ad-hoc conventions have evolved to deal with the problem. Many times, I've employed faux tags to mark up my instant messages to colleagues--closing a comment with a "kidding>" tag to inflect what I said with the proper overtone.
So systematization could well be useful. But that doesn't make it easy.
Schroeder knows well there are plenty of difficulties. He listed three: shortcomings in sensing technology, such as error-prone facial expression recognition; the possibility that an erroneous reading of emotion could cause more damage than no reading at all; and insufficient quality in technology to express emotion, for example through synthesized speech.
In these areas, though, there's plenty of research and progress, he said. And when it comes to the difficulties of capturing subtle emotional states, EmotionML is designed to offer a much more colorful palette than what's conventional today.
"We have observed that when engineers naively try to build 'emotion' into their systems, they are thinking in terms of basic emotion categories such as 'happy,' 'angry,' 'sad,' etc. However, these gross terms do not capture the vast majority of the information that we humans use to make sense of emotions," Schroeder said. "For example: How intense is the emotion? What type of feeling is it, active or passive, positive or negative, powerful or helpless?"
The draft lists not just possible emotional states but five sets of them from which programmers can choose. Extending the palette metaphor, one includes just the primary colors for those who want the basics.
It dates from a 1972 publication by Paul Ekman and is based on six universal facial expressions: anger, disgust, sadness, happiness, surprise, and fear.
Other vocabularies offer a broader list but venture closer into gray areas. Where do you draw the line between "contempt" and "disgust" in the 24-state list from a newer, 2007 paper?
"EmotionML markup MUST refer to one or more vocabularies to be used for representing emotion-related states," the draft standard says. "Due to the lack of agreement in the community, the EmotionML specification does not preview a single default set which should apply if no set is indicated. Instead, the user MUST explicitly state the value set used."
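In the draft's syntax, that requirement looks something like this--a minimal sketch, in which the category-set URI and the "big6" fragment identifier follow the W3C's companion vocabulary document for Ekman's six basic emotions, and could differ in later drafts:

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- The category-set attribute names the vocabulary explicitly;
       "big6" refers to Ekman's six basic emotions. -->
  <emotion>
    <category name="surprise"/>
  </emotion>
</emotionml>
```

The point is that the markup never says "surprise" in a vacuum: every annotation declares which vocabulary its terms come from.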
Wait--the W3C is trying to standardize emotional states but can't even pick a single set of descriptors? Schroeder said the problem is simply too complicated to boil down to one set, but a few are better than nothing.
"It is impossible to impose one single emotion vocabulary," he said. "EmotionML is striking a balance, providing a carefully selected set of 'recommended' vocabularies" and documenting how to use them.
Things get more complicated, too. Moving beyond emotional states, there are "emotion dimension sets" that qualify emotions with attributes such as intensity and whether they're being experienced as positive or negative.
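A hedged sketch of how such a dimension set might annotate a single state--here the dimension-set URI and the dimension names are drawn from the spec's vocabulary document, with values assumed to run from 0.0 to 1.0:

```xml
<emotion xmlns="http://www.w3.org/2009/10/emotionml"
         dimension-set="http://www.w3.org/TR/emotion-voc/xml#fsre-dimensions">
  <!-- High arousal plus low valence: roughly "agitated and unhappy" -->
  <dimension name="arousal" value="0.8"/>
  <dimension name="valence" value="0.2"/>
</emotion>
```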
A dark side?
But there could be a dark side, too, opening a new class of worries for those online.
Might a company target you with particular advertising if it knows you're jubilant or despairing? Might a computer employ new subconscious communication to subtly manipulate your behavior? Do you really want somebody at the other end of a conversation knowing your emotions--or, if you can disable that sharing, knowing why you did?
After all, some of the technology used in emotion detection is the same used in lie detectors. Emotion detection could be a handy tool for surveillance, making police more effective but also potentially more intrusive. Big Brother could find out not just what you wrote, but what you felt.
EmotionML could help bring the pluses and minuses of this technology to fruition faster by providing a means for different technology elements to communicate better--for example, decision-making software employing sensor data.
I suspect that if the technology really does mature to this state of usefulness, humans will of course adapt. Just as we've had to adjust to the risks of the ill-advised instant message, the constant distractions of today's communications, and the exposure of videoconferencing, we'll have to adjust to the combination of emotions and computers.
One possibility: if people are drawn to the benefits of emotionally sophisticated computer interactions, perhaps we'll figure out when it's best to switch it off and just pretend we're Spock sometimes.