
Chips promise to boost speech recognition

Carnegie Mellon University researchers are designing processors to solve the notoriously difficult computing problem.

Stephen Shankland
PALO ALTO, Calif.--Researchers at Carnegie Mellon University are using custom computer chips to tackle a problem in speech recognition that software largely hasn't been able to solve.

Speech recognition has long been a computer industry dream--but it has never become a practical reality for most computer users. Researcher Rob Rutenbar argues that using a custom processor rather than software will make speech recognition faster and lower its power consumption.

"It's time to liberate speech recognition from the unreasonable limitations of software," Rutenbar said here Tuesday at the Hot Chips conference. He likened the situation to the now-widespread use of special-purpose hardware for graphics.

Faster chip-based speech recognition will enable video players to search rapidly for Arnold Schwarzenegger saying "Hasta la vista, baby," in a movie, he said. And lower power consumption will enable a cell phone to take dictated notes.

So far, researchers on the university's "in silico vox" project are working on two chip approaches, one using custom chips called ASICs (application-specific integrated circuits) and another using reconfigurable chips called FPGAs (field programmable gate arrays). Rutenbar showed a videotaped demonstration of the university's technology using a low-end FPGA to recognize words in a limited 1,000-word vocabulary.

The system recognized several short sentences at about twice the speed it took for researchers to speak them. At the same time, its accuracy was about the same as that of Carnegie Mellon's Sphinx speech recognition software.

Rutenbar said the researchers estimate their first-generation custom chip approach will be faster--nearly twice the rate of regular speech for a 5,000-word vocabulary. They're also working on a custom chip that will work at 10 times the spoken rate, with later goals including speed-up factors of 100 and 1,000, he said.

The speech recognition chip's duties begin with converting an audio signal into combinations of the roughly 50 distinct sounds, such as "n," that make up English. That's tricky--the "i" sound is different in the word "five" than in the word "nine" because of the sounds pronounced immediately before and after it, so in effect there are more than 1,000 sound possibilities, he said.
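A toy sketch of why context matters can make this concrete. The tiny pronunciation dictionary and phoneme labels below are made up for illustration (this is not CMU's code); the idea is that each of the ~50 base sounds gets a separate model for every left/right neighbor pair it occurs with, which is how the count balloons past 1,000:

```python
# Illustrative only: context-dependent sound units ("triphones").
# The lexicon below is hypothetical; real systems use large dictionaries.
lexicon = {
    "five": ["F", "AY", "V"],
    "nine": ["N", "AY", "N"],
    "fine": ["F", "AY", "N"],
}

def triphones(phones):
    """Yield (left, center, right) context-dependent units for one word."""
    padded = ["SIL"] + phones + ["SIL"]  # silence padding at word edges
    for i in range(1, len(padded) - 1):
        yield (padded[i - 1], padded[i], padded[i + 1])

# Collect every distinct context-dependent unit across the lexicon.
units = {t for word in lexicon.values() for t in triphones(word)}

# The single base sound "AY" already needs three separate models here,
# one per surrounding context -- the same effect that turns ~50 sounds
# into 1,000-plus possibilities at full vocabulary scale.
ay_variants = {t for t in units if t[1] == "AY"}
print(len(ay_variants))  # 3
```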

Next, the chip compares those sounds to those used in actual words. Finally, it looks for likely combinations of words--both pairs and threesomes--to improve accuracy. The upshot is that the chip's performance depends on high memory bandwidth so it can make those comparisons quickly, he said.
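The word-combination step can be sketched in a few lines. The counts below are invented and the smoothing is a textbook add-alpha scheme, not necessarily what CMU's hardware implements; the point is how statistics over word pairs and triples let a recognizer prefer a familiar phrase over an acoustically similar but unlikely one:

```python
# Toy language-model rescoring with assumed (hypothetical) corpus counts.
from math import log

bigram_counts = {("hasta", "la"): 50, ("la", "vista"): 40}
trigram_counts = {("hasta", "la", "vista"): 38}

def score(words, alpha=1.0, vocab=1000):
    """Log-probability of a word sequence using smoothed triple counts."""
    total = 0.0
    for i in range(2, len(words)):
        tri = tuple(words[i - 2 : i + 1])   # word triple ending here
        bi = tuple(words[i - 2 : i])        # its word-pair context
        num = trigram_counts.get(tri, 0) + alpha
        den = bigram_counts.get(bi, 0) + alpha * vocab
        total += log(num / den)
    return total

# Two acoustically similar candidate transcriptions:
good = score(["hasta", "la", "vista"])
bad = score(["hasta", "la", "fista"])
print(good > bad)  # True: the phrase seen in the counts scores higher
```

In a real system, these counts would come from a large text corpus, and the scores would be combined with the acoustic-match scores from the earlier stages.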