Typing is so 19th century: CES panelists discuss its replacements

Gestures and voice control could soon become the most common way to interact with your devices, as keyboards go the way of the dinosaur.

Laura Hautala

CNET hosts Tim Stevens and Brian Cooley talk with the voice behind Siri, Susan Bennett, at a panel on the technology that will replace typing.

James Martin/CNET

Typing will soon become a thing of the past.

That was the prediction debated by a group of speakers at the Consumer Electronics Show in Las Vegas on Wednesday, in a panel hosted by CNET.

New technology is here to replace that tip-tap sound of typing echoing throughout your office and home, as well as buttons, knobs and other user interfaces. What will it look like? Mostly like two technologies we're already familiar with: Siri and Kinect.

That is, voice and gesture recognition.

To start the panel off, CNET hosts Tim Stevens and Brian Cooley chatted with Susan Bennett, the voice of Siri. Apple's voice recognition software represents one of today's most widely used replacements for typing. In fact, the voice of Siri gently teased the hosts over the PA system, interrupting their presentation and announcing that she's "always listening."

Then Bennett herself came onstage and described working with the engineers who developed Siri before Apple bought the technology. "We certainly couldn't imagine the phone would be here," Bennett said of the project's early days.

A panel of experts also took the stage to talk about just how far off this world of simply talking and gesturing with our devices might be.

Pattie Maes, department head of MIT's Media Lab, talked about the limitations of "natural language," or spelling out exactly what you want by talking. That process isn't always more convenient than tapping some keys or a button, she said.

What's more, a lot of human speech is paired with facial expressions, gestures and body positioning, which limits its use further, she said. "It will always be limited to certain applications," Maes said.

Vlad Sejnoha, an executive at Nuance Communications, disagreed, saying machines should be able to take commands through language because people do. "We seem to manage," he said.

Marcus Behrendt, head of user experience at BMW, said Maes had a point: sometimes it's cumbersome to string together a whole sentence. "Sometimes you need a knob," he said.

Wendy Ju, executive director for interaction design research at Stanford University, pointed out that we already command technology using gestures, but added that more could be done to make the process interactive.

She pointed to observations of automated doors at retail outlets. When a store has just closed, a customer might walk up and expect the door to open. When it doesn't, they'll "walk louder," Ju said, miming an exaggerated, stomping walk that might trigger an automated door. A better door, or any other motion-reading device, would find a way to communicate why it wasn't opening, so the user knows it isn't broken.


Interpreting gestures could also save lives. Behrendt pointed out that "drowsiness detection" systems in many cars already read a driver's body language and take measures to prevent an accident. In the future, such systems could also detect whether a driver is overly stressed.

But on other devices, gesture control hasn't progressed as far. One example: the smartwatch, constrained by its small interface.

"The problem with smartwatches is that they're tiny for interaction, as well as output," Maes said. Advances in gesture control could let you "wave your hands above the watch you're dealing with," she said.

And though there's the potential that more devices on our bodies could drive us to distraction, Ju said she hoped that by learning to read our gestures, devices could actually prevent us from getting distracted.

"Our most valuable resource will be our attention," Ju said. One pictures the ill-fated Microsoft Office animated paperclip called Clippy, who came to be despised for jumping up in the margins of word processing documents to offer unwanted assistance. Except now he's on our Google Glasses asking us if we need help ordering at Starbucks.

But instead of feeding us more and more information as we go about our daily lives with smartwatches, phones and fitness trackers strapped to ourselves, devices could help with "managing the traffic flow of information," Ju said.

Maes concluded by musing on how well our devices will get to know us once they start really understanding our speech and gestures. They could "help us become a better version of ourselves," she said. "We'll eventually all be cyborgs."