Researchers at the University of Rochester have developed software that can really understand you when you talk to it, maybe not quite like H.A.L. in 2001: A Space Odyssey, but a lot closer than ever before.
The software is different because it not only understands spoken words--as other speech recognition systems can--but also infers the meaning of what you say from the context of an entire conversation, analyzing both the dialog and the structure of sentences, according to a report in New Scientist magazine.
The software is not yet a commercial application, though it runs on a standard workstation from Sun Microsystems. It also works best in narrowly defined scenarios, such as a demonstration application that lets a user route imaginary trains between cities in the northeastern U.S. through verbal commands. But the system can adapt its responses based on the context of the dialog and make logical replies, even when it mishears spoken words.
Researchers concede that the system still has some kinks. But in tests, it successfully carried out spoken commands more often than not. While conventional speech recognition systems understand about 40 percent of the words in a conversation, the Rochester system recognized roughly 75 percent, according to the researchers.