You might think you can't have a phone conversation with someone who's deaf, but Dimitri Kanevsky would not only disagree, he'd prove you wrong.
Deaf since he was 3, Kanevsky has hardly let his disability get in the way of progress -- or success. Born in the Soviet Union, he eventually emigrated, first to Israel and then to the United States, and went on to become a research staff member in the speech and language algorithms department at IBM Research.
On Monday, Kanevsky and 13 others were honored at the White House in a ceremony to celebrate those "leading the fields of science, technology, engineering, and math for people with disabilities. These leaders are proving that when the playing field is level, people with disabilities can excel in STEM, develop new products, create scientific inventions, open successful businesses, and contribute equally to the economic and educational future of our country."
Throughout his career, Kanevsky has focused on developing technologies that help people with hearing loss. Some have made it possible to, yes, have a phone conversation, while others have enabled deaf people to talk face to face with someone and understand what's being said.
Yesterday, in, yes, a phone conversation with CNET, Kanevsky spoke about his life's work of developing systems to help people with disabilities. To do so, he used a technology that instantly transcribes both sides of a conversation and posts the text directly to a Web site both parties can view. CNET asked for, and was given, permission to publish an audio recording of some of the conversation along with this edited transcript. Below, you can listen to some of the conversation, during which Kanevsky read a transcription of the questions before responding with his answers.
Q: Thank you so much for taking the time to speak with us, and congratulations on being honored at the White House yesterday.
Dimitri Kanevsky: Thank you very much.
Could you start by giving us some details on the technology we're using to conduct this interview?
Kanevsky: Let me explain. This is transcribed by a (human) writer. I suggested this technology 15 years ago, at the onset of the Internet. I suggested at that time that court reporters could write to a Web site, I could read on that Web site what they transcribed, and they would hear me over the telephone. This is how transcription over the Internet started. At that time I used various technologies to do this, and since then the technology has improved; many other people and court reporting services have started to use it for transcription. It also combines with speech recognition technology.
Is what I'm seeing totally automated?
Kanevsky: No, a human writer transcribed this over the Internet.
How does it work?
Kanevsky: The writer hears what we are saying on the telephone. He or she works like a court reporter, writing the text and posting it to the Web site. And then we read the transcription on the Web site. This is a simple thing, but 15 years ago when I suggested it, it was a totally new thing. But then a lot of people started to use it at IBM and in many places, and many court reporters developed different technologies for how to send the text to a Web site.
Can you explain a little bit about how court reporting technology works?
Kanevsky: Court reporters have special machines that they use to type phonetically, so they can quickly write very complex phrases. Their machine is connected to a computer, which is connected to the Internet.
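The phonetic-typing step Kanevsky describes can be sketched in code: steno software expands a reporter's phonetic "chords" into English words via a personal dictionary. The chord spellings and dictionary entries below are invented for illustration, and the greedy matcher is only a minimal sketch of the idea, not a real steno system:

```python
# Minimal sketch of steno chord expansion. A court reporter's machine
# emits phonetic "chords"; software maps each chord (or chord sequence)
# to English words via a dictionary. Entries here are illustrative only.
STENO_DICT = {
    ("KORT",): "court",
    ("RORT", "ER"): "reporter",
    ("TREUPB", "SKRAOEUB"): "transcribe",
}

def expand(chords):
    """Greedily match the longest known chord sequence at each position."""
    words, i = [], 0
    while i < len(chords):
        for length in range(len(chords) - i, 0, -1):
            key = tuple(chords[i:i + length])
            if key in STENO_DICT:
                words.append(STENO_DICT[key])
                i += length
                break
        else:
            words.append(chords[i].lower())  # unknown chord: pass it through
            i += 1
    return " ".join(words)

print(expand(["KORT", "RORT", "ER"]))  # -> "court reporter"
```

A working dictionary contains tens of thousands of entries and handles prefixes, suffixes, and conflicts; the final step in the pipeline is then just sending each expanded phrase on to the shared Web page.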
Let's step back. I understand that you lost your hearing when you were three years old. When you were a child you must have felt there was no way you would ever be able to communicate normally. Is that true?
Kanevsky: Not exactly. Because I learned to adapt quickly and I was always with people who hear normally. So I went to kindergarten with children with normal hearing and to school with children with normal hearing. And so on. I lip-read very well from the very beginning.
You trained as a mathematician, and I'm curious how it came to be that as a mathematician you started to work on communications technology.
Kanevsky: When I was receiving my Ph.D. at Moscow University, I planned to go to Israel, and I knew a foreign language would be difficult for me to lip-read. For example, Hebrew has a lot of high-frequency sounds, like "Shabat" and "Shalom." So I decided to develop a device that shifts high frequencies to low frequencies. I brought it to Israel, and it assisted me in lip-reading so I could start to speak in both Hebrew and English and understand other people.
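The frequency-lowering idea can be illustrated numerically. One classical way to move high-frequency energy down into an audible band is heterodyning: multiply the signal by a cosine at the shift frequency, then low-pass filter. This is a minimal pure-Python sketch of that general technique, not Kanevsky's actual circuit; the sample rate, shift amount, and crude moving-average filter are all illustrative assumptions:

```python
import math

def shift_down(samples, rate, shift_hz):
    """Heterodyne a signal downward: multiplying by cos(2*pi*shift_hz*t)
    produces copies of each component at f - shift_hz and f + shift_hz;
    a short moving average then acts as a crude low-pass filter that
    favors the downshifted copy. Illustrative sketch, not the device."""
    mixed = [s * math.cos(2 * math.pi * shift_hz * n / rate)
             for n, s in enumerate(samples)]
    win = max(1, rate // 4000)  # moving-average window as a crude low-pass
    return [sum(mixed[max(0, n - win + 1):n + 1]) / win
            for n in range(len(mixed))]

# A 6 kHz tone (hard to perceive with high-frequency hearing loss),
# shifted down by 4 kHz, leaves its strongest component near 2 kHz.
rate = 16000
tone = [math.sin(2 * math.pi * 6000 * n / rate) for n in range(1600)]
lowered = shift_down(tone, rate, 4000)
```

Real frequency-transposition hearing devices use much sharper filters, and many modern ones compress a band rather than shifting it linearly, but the principle of relocating speech energy into the listener's residual hearing range is the same.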
How long did it take to develop that system?
Kanevsky: I developed the system while I waited for permission to emigrate from Russia to Israel. In approximately nine months I learned electronics and built this device. Then I set up a company in Israel, and we hired a professional engineer to develop the device to a more sophisticated level.
Your communication innovations must have multiple uses beyond helping people with hearing loss. Since you work for IBM, how important is it to have those multiple uses in order for IBM to support your research?
Kanevsky: My mainstream work is to develop speech recognition technology and improve speech recognition accuracy. At IBM, I work as a mathematician, applying mathematical algorithms, and I also have inventions in many other areas. If I have an idea for how something can help [someone with] a disability, I immediately try to develop it and apply it to that disability. I can often do this because I have very broad access to wonderful technologies and to the people who work at IBM Research, so I can often come up with ideas and technology that help people with various disabilities.
You developed a device called the Artificial Passenger. Could you explain what it was?
Kanevsky: The Artificial Passenger is a technology designed to prevent drivers from falling asleep. To do this, it talks to the driver, or interrupts the driver. It asks the driver a question. They can discuss events that are interesting to the driver. The Artificial Passenger can also play audio games, such as asking trivia questions. It also monitors the driver's condition: if it detects that the driver is too tired, it suggests stopping and resting. The Artificial Passenger can tell from the driver's voice, or from how the driver answered questions, whether he or she is too tired.
I was driving with my wife at night. She was driving, and I was talking to her to keep her from falling asleep. I started to think about what would happen if my wife were driving alone. So I made an artificial substitute for me.
Can you explain the work that you have done for the Liberated Learning Consortium?
Kanevsky: They received a grant to develop speech recognition technologies that help deaf people take university courses. They asked IBM to help them, and at that time I had already developed speech recognition technology for IBM France that helped children at a deaf school to lip-read. I had a team of programmers and some ideas, and we developed technology for St. Mary's University, which expanded this technology to work over the Internet. We adapted the vocabulary for educational courses so new course vocabulary could be introduced. We developed a method for quick correction of errors, and a friendly user interface for students and for teachers, so that they could get notes after lectures, and these notes could be integrated into lecture presentations.
How hopeful are you that normal communication or somewhat normal communication is something that most deaf people can hope for in the future?
Kanevsky: I want to stress one thing: It is very important that when companies or universities communicate with people with disabilities, they focus attention on their abilities. There are a lot of skills people with disabilities develop precisely because they have had to function well in different environments. They face challenges that other people have not had. They develop creativity and imagination, and if companies hire them, they can provide a lot of ways to help the company advance and be very competitive.
But I think technology is developing fast, and the big breakthrough will be wireless communications being everywhere. So to answer your question: when wireless communications are everywhere, deaf people will be able to communicate normally.
Could you explain a little bit about your work in wearable technologies and how they relate to helping the deaf? And is that technology already viable in the market, or when do you think it will be?
Kanevsky: I developed electronic glasses that could print information on the lenses from a computer. You can wear these glasses and see everything around you, and at the same time a transcription can be overlaid on what you see. For example, you could talk to a person, look at that person's face, and see the transcription in the lens. I developed this a few years ago, and it was very good, but unfortunately it remained a research prototype. But I think eventually something will be developed as a product, and this concept will be used and available for everyone.
What are some of the business challenges that have to be overcome for it to be available for everyone?
Kanevsky: I think the biggest challenge at IBM Research is that they only develop a product if it has a billion-dollar impact. Anything that has a $100 million impact or less is good, but usually IBM outsources these to other companies. So the technology was outsourced to a company that ended up having financial difficulties. Now we need to wait until another company wants this $100 million business.
What was it like to be part of the White House ceremony on Monday?
Kanevsky: I was honored to participate in the White House event, and to be recognized for work that we did at IBM. I think this is very important for people with disabilities to know that their work can be recognized at such a high level as the White House. And definitely, it will help to advance new ideas for people with disabilities. It is a good role model for them.