
EMMA hitches handwriting, speech, keypad

Like the Jane Austen heroine, a newly drafted Web standard called EMMA has a penchant for matchmaking, letting Web pages and applications interpret an array of interaction methods.

The World Wide Web Consortium (W3C) on Monday published the first public working draft of the Extensible MultiModal Annotation (EMMA) language, designed to marry an array of interaction methods to a single data exchange format.

Currently, people can access Web pages and applications through a variety of interface types, or what the W3C calls "modes." These include computer keyboards, telephone keypads, speech-recognition applications, and handwriting-recognition devices.

But the Web lacks a standard way of interpreting those different methods of interaction and sorting through ambiguities introduced by inexact input techniques such as handwriting and speech.

That leaves Web developers to craft their own interpreters, typically with scripts or programming languages.

A so-called declarative method like EMMA, based on the digital document lingua franca Extensible Markup Language (XML), both simplifies and standardizes the process of interpreting multimodal interaction, according to the W3C.

"Clearly there's great potential for multimodal technologies' giving people the ability to choose how they interact with applications," said Dave Raggett, a Canon consultant and W3C fellow. "EMMA is meant to be a simple way of dealing with all these different kinds of input. The application just has to deal with XML input, so EMMA simplifies the way the application is constructed."

EMMA, a data exchange format, forms just one piece of the W3C's plan for standardizing multimodal interactivity, Raggett said. Others include the consortium's Ink Markup Language for handwriting recognition and its framework for the project as a whole.

Multimodal computing applications have attracted the interest of industry heavyweights in recent months. IBM introduced a multimodal software toolkit for use with Linux computers, and Microsoft has put its muscle behind advancing voice interaction technology.

Working group members include representatives of Alcatel, Apple Computer, AT&T, Canon, Cisco Systems, Electronic Data Systems, Ericsson, France Telecom, Hewlett-Packard, IBM, Intel, Microsoft, Mitsubishi Electric, Motorola, NEC, Nokia, Nortel Networks, Opera Software, Oracle, Panasonic, Sun Microsystems and Voxeo.