A group of researchers says shoes may be the next thing in the busy field of wearable computers and gesture interfaces.
Computer scientists from the Telekom Innovation Laboratories, the University of Munich, and the University of Toronto this week published a paper on ShoeSense, a wearable system that lets people control a smartphone with hand and arm gestures.
It's one of many gesture-interface papers being presented this week at the Conference on Human Factors in Computing Systems (CHI 2012), which is sponsored by the research arms of Microsoft, Google, eBay, and other tech companies.
Wearable computing got a high-profile plug when Google introduced Project Glass, a set of glasses that does much of what a smartphone can but, according to a demonstration from one of its makers, is operated by eye gestures, head motions, and a button for taking photos.
Developing alternative inputs for smartphones makes sense when a person is moving or occupied with another task, such as driving, or when pulling out a phone would be inappropriate, such as during a family dinner, the ShoeSense developers write in their paper.
The developers envision a shoe-mounted sensor that can recognize customizable hand and arm gestures. In a video, a user moves his finger along his forearm to turn up the volume on a music player in his pocket, pinches to select the next track, and then pinches with three fingers to send an "I will be late" e-mail to his wife.
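To make that interaction model concrete, here is a minimal sketch of how a recognizer's gesture labels might be dispatched to phone actions. The gesture names and handler functions are illustrative assumptions, not part of the ShoeSense implementation, which lets users define their own mappings.

```python
# Hypothetical sketch: mapping recognized gesture labels to phone actions.
# Gesture names and handlers are illustrative; ShoeSense's actual gesture
# set is customizable and defined by the user.

from typing import Callable, Dict

def volume_up() -> None:
    print("Volume up")  # stand-in for a real media-player call

def next_track() -> None:
    print("Skipping to next track")

def send_late_email() -> None:
    print("Sending 'I will be late' e-mail")

# Dispatch table: one entry per gesture label the recognizer can emit.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "slide_finger_along_forearm": volume_up,
    "two_finger_pinch": next_track,
    "three_finger_pinch": send_late_email,
}

def handle_gesture(label: str) -> None:
    """Invoke the action bound to a gesture label; ignore unknown labels."""
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        action()

if __name__ == "__main__":
    # Simulated output stream from the shoe-mounted recognizer.
    for g in ["slide_finger_along_forearm", "two_finger_pinch", "three_finger_pinch"]:
        handle_gesture(g)
```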
A shoe-mounted sensor has advantages over glasses: it allows eyes-free interaction and doesn't constrain body motion. ShoeSense's designers also argue that operating a smartphone through arm and hand gestures can be more socially acceptable than doing so via glasses. The sensor could potentially be powered by walking motion.
"ShoeSense introduces a novel and unique perspective (from the shoe), making it possible to recognize discreet and relaxed, as well as large and demonstrative, gestures without the need for cumbersome hats or body-mounted sensors," according to the paper.
For a working demonstration, the sensor was actually a Microsoft Kinect game controller, which includes a depth camera whose output can be used to recognize gestures, but the researchers envision shoe sensors small enough to be strapped onto shoelaces.
As sensors get smaller and less expensive, computer scientists are exploring a wide range of gesture-based interfaces. These can be used to interact with existing devices or with other objects in a building.
Also at CHI, Microsoft Research announced SoundWave, a way to operate a laptop with hand gestures: the machine's speakers emit an inaudible tone, and its microphone picks up shifts in the sound reflected off a moving hand. Another paper described Touché, a gesture-sensing system from Disney Research and Carnegie Mellon University that enables things like smart doorknobs and gesture-operated tabletops.
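As a rough illustration of the Doppler principle behind SoundWave (a sketch of the idea, not Microsoft's implementation), the snippet below synthesizes a reflected tone instead of using real speaker and microphone I/O, then estimates how far the echo's frequency has shifted from the emitted pilot tone using an FFT.

```python
# Rough sketch of the Doppler-shift principle behind SoundWave.
# Not Microsoft's implementation: we synthesize the "microphone" signal
# rather than driving the laptop's actual speakers and microphone.

import numpy as np

SAMPLE_RATE = 44100   # samples per second
TONE_HZ = 18000.0     # near-inaudible pilot tone
DURATION = 0.1        # analysis window in seconds

def synthesize(reflected_hz: float) -> np.ndarray:
    """Simulate the recorded signal: emitted tone plus a shifted echo."""
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    emitted = np.sin(2 * np.pi * TONE_HZ * t)
    echo = 0.3 * np.sin(2 * np.pi * reflected_hz * t)  # hand reflection
    return emitted + echo

def doppler_shift_hz(signal: np.ndarray) -> float:
    """Estimate how far the strongest off-tone peak sits from the pilot."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    # Ignore bins within 20 Hz of the emitted tone, then find the peak.
    mask = np.abs(freqs - TONE_HZ) > 20.0
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return peak - TONE_HZ

if __name__ == "__main__":
    # A hand moving toward the laptop compresses the echo upward in pitch.
    shift = doppler_shift_hz(synthesize(18060.0))
    print(f"Estimated Doppler shift: {shift:+.0f} Hz")  # positive => approaching
```

A positive shift indicates motion toward the machine and a negative shift motion away; a real system would map the sign, size, and duration of these shifts to commands such as scrolling.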
A number of papers at CHI also covered touch screens, which, with the soaring popularity of tablets and smartphones, have become an active field of research.
A lot of the research in computing interfaces and wearable computers reflects the new possibilities for bridging the digital and physical worlds with sensors such as the Kinect. But whereas multitouch devices have caught on rapidly, gesture interfaces outside of video games still face the challenge of finding real-world applications.