Engineers test sign language on cell phones

University of Washington researchers are testing a tool called MobileASL that uses motion detection to identify American Sign Language and transmit images over U.S. cell networks.

Josiah Cheslik, a UW junior and volunteer in the MobileASL field study, demonstrates using the phone to communicate in his native language. Pete Michor, another participant in the study, is seen in the background. University of Washington

We all know what it's like to send a text message or e-mail whose tone is completely misinterpreted. A series of follow-up messages to explain ourselves ensues, and the efficiency of the original message is long gone.

That's one reason engineers at the University of Washington are testing a tool called MobileASL that uses motion detection to identify American Sign Language and transmit images over U.S. cell networks. Sometimes, words alone just don't cut it.

"Sometimes with texting, people will be confused about what it really means," says Tong Song, a Chinese national who is studying at Gallaudet University, a school for the deaf in Washington, D.C., and participating in UW's summer pilot test. "With the MobileASL phone, people can see each other eye to eye, face to face, and really have better understanding."

Eve Riskin, a UW professor of electrical engineering, says the MobileASL team's study of 11 students is the first to examine how deaf and hearing-impaired people in the U.S. use mobile video phones. The researchers plan to launch a larger field study this winter.

The engineers are now working to optimize compressed video for sign language, concentrating image quality around the face and hands so the overall data rate can drop to about 30 kilobits per second. To minimize battery drain, the phones use motion sensors to determine whether the user is signing.
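The two ideas in that paragraph, spending more bits on the face and hands than on the background, and skipping work when nothing is moving, can be sketched in a few lines. This is an illustrative toy, not MobileASL's actual encoder; the function names, quantization steps, and motion threshold are all assumptions.

```python
# Hypothetical sketch of region-of-interest (ROI) compression plus
# motion gating, as described above. Real encoders (e.g. H.264) do
# this with adaptive quantization; here we fake it on raw pixels.

def quantize(value, step):
    """Coarser step -> fewer distinct levels -> fewer bits to store."""
    return round(value / step) * step

def encode_frame(frame, roi, fine_step=4, coarse_step=32):
    """frame: 2D list of 0-255 pixel values.
    roi: (top, left, bottom, right) bounding box around face/hands.
    Pixels inside the ROI keep fine detail; the rest is coarsened."""
    top, left, bottom, right = roi
    out = []
    for y, row in enumerate(frame):
        new_row = []
        for x, px in enumerate(row):
            in_roi = top <= y < bottom and left <= x < right
            new_row.append(quantize(px, fine_step if in_roi else coarse_step))
        out.append(new_row)
    return out

def is_signing(prev_frame, frame, threshold=10.0):
    """Motion gate: only bother encoding when the mean absolute
    pixel difference between frames suggests the user is signing."""
    diffs = [abs(a - b)
             for prev_row, row in zip(prev_frame, frame)
             for a, b in zip(prev_row, row)]
    return sum(diffs) / len(diffs) > threshold
```

In a capture loop, a frame would be passed to `encode_frame` only when `is_signing` fires, which is where both the bandwidth and the battery savings come from.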

Of course, phones like the HTC Evo and Apple iPhone 4 already offer video conferencing, but wireless carriers are blocking such bandwidth-heavy conferencing on their cellular networks and introducing tiered pricing plans to account for heavy data usage.

MobileASL software could potentially run on any device whose video camera is on the same side as the screen. University of Washington

Meanwhile, the UW team estimates that MobileASL uses just a tenth the bandwidth of the iPhone's FaceTime video conferencing and could be integrated into any phone that has a video camera on the same side as the screen. In other words, you don't have to be able to afford an iPhone 4 and the bandwidth charges to use MobileASL.

"We want to deliver affordable, reliable ASL on as many devices as possible," Riskin says. "It's a question of equal access to mobile communication technology."

So far Riskin's team has learned that of the 200 phone calls made in the study, the average lasted about 90 seconds. "We know these phones work in a lab setting, but conditions are different in people's everyday lives," Riskin says. "The field study is an important step toward putting this technology into practice."
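Combining the study's own numbers gives a rough sense of how light these calls are on a data plan. A back-of-envelope calculation, assuming the roughly 30-kilobit-per-second target rate described earlier (the exact per-call figure is an estimate, not from the study):

```python
# Rough data cost of an average MobileASL call.
AVG_CALL_SECONDS = 90        # average call length reported in the UW study
RATE_KILOBITS_PER_S = 30     # assumed MobileASL target bitrate

bits = RATE_KILOBITS_PER_S * 1000 * AVG_CALL_SECONDS
kilobytes = bits / 8 / 1000
print(kilobytes)  # 337.5 -- about a third of a megabyte per call
```

At that rate, even hundreds of calls a month would barely register against a typical tiered data cap.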

The sample size of 11 students is undeniably small, and each participant is an academically gifted deaf or hard-of-hearing student interested in pursuing a computing career, so the researchers may not uncover user-interface issues until they run a larger study that reaches beyond tech-savvy users.
