Cool Kinect move: Reading sign language in real time
Here's another example of how the Kinect works great as a low-cost Swiss Army knife for human-computer interaction: the multisensor device reads sign language like a champ.
Earlier this week at Microsoft's DemoFest in Redmond, Wash., the company's research arm showed off an incredible union of technologies that could finally usher in an inexpensive solution for people who want to communicate with a computer through sign language.
Dreamed up by researchers with the Chinese Academy of Sciences and Microsoft Research Asia, the system combines a Kinect camera sensor, Bing's translation services, and recognition software that detects American and Chinese sign language and converts it into computer text -- all on the fly.
Kinect's advanced body-tracking sensors and ability to read 3D depth provide what seems like a perfect platform for tracking the complex hand movements associated with sign language.
The interactive interface, described further in the research paper Sign Language Recognition and Translation with Kinect (PDF), also works in the other direction, giving hearing people a visual way to communicate with deaf and hard-of-hearing users. After someone types text into the program, an animated onscreen avatar performs the sign language counterpart. The other person in the conversation can then reply by performing sign language to the Kinect, and those signs get translated into text.
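The two-way loop described above can be sketched in miniature: recognized signs map to text in one direction, and typed words map back to gestures for the avatar in the other. This is a toy illustration only -- every name below is hypothetical, and the real system relies on Kinect skeleton tracking and a trained recognizer, neither of which is modeled here.

```python
# Hypothetical lookup table standing in for the trained sign recognizer:
# each recognized gesture ID maps to a word of output text.
SIGN_TO_TEXT = {
    "gesture_hello": "hello",
    "gesture_thanks": "thanks",
}

# Reverse mapping, standing in for the avatar's animation library.
TEXT_TO_SIGN = {word: gid for gid, word in SIGN_TO_TEXT.items()}

def signs_to_text(gestures):
    """Convert a sequence of recognized gesture IDs into a text message."""
    return " ".join(SIGN_TO_TEXT.get(g, "[?]") for g in gestures)

def text_to_signs(message):
    """Map typed words to gesture IDs for the onscreen avatar to perform."""
    return [TEXT_TO_SIGN[w] for w in message.split() if w in TEXT_TO_SIGN]

# One round trip of the conversation loop:
print(signs_to_text(["gesture_hello", "gesture_thanks"]))  # hello thanks
print(text_to_signs("hello thanks"))  # ['gesture_hello', 'gesture_thanks']
```

The interesting engineering is, of course, in the parts this sketch elides: segmenting continuous hand motion from depth data and classifying it reliably across signers.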
"While it is still a research project, we ultimately hope this work can provide a daily interaction tool to bridge the gap between the hearing and the deaf and hard of hearing in the near future," said Guobin Wu, a manager with Microsoft Research Asia.