Apple patent application blends touch, voice, face
If it ever comes to pass, new Apple technology could let you manipulate an object onscreen using a combination of multitouch, voice recognition, and 'gaze vector' commands.
Apple may be exploring new ways to build on the multitouch interface work that is central to the company's plans for the future.
Unwired View unearthed a patent application filed by Apple (thanks, Gizmodo) containing ideas for a user interface system that builds on the multitouch input used on the iPhone by adding technology for voice recognition and even facial recognition.
Wayne Westerman and John Elias, the brains behind a multitouch interface company called Fingerworks, acquired by Apple in 2005, are listed as inventors on the patent application, as they have been for several other multitouch patents coming out of Apple.
The idea behind the latest patent application is to combine input from different sources, whether that's the now-familiar iPhone multitouch concept, voice recognition, or facial expressions.
As the application puts it: "Systems may have multiple input means. However, each input means is typically operated independently...in a nonseamless way. There is no synergy between them. They do not work together or cooperate for a common goal such as improving the input experience."
That's what Apple hopes to do with the system it's trying to patent: combine multiple forms of input to control a computer more efficiently. For example, you could select an object with a finger gesture, order the computer with a voice command to change that object's color to blue, and then tell the computer where you want to place the object by staring at the lower right-hand corner of the screen.
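To make the idea concrete, here is a minimal sketch of that kind of input fusion, where each modality supplies one piece of a single command. This is purely illustrative; the names and structure are assumptions, not anything from Apple's filing.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    """A hypothetical on-screen object the user is manipulating."""
    color: str = "red"
    position: str = "center"

def fuse_inputs(shape, touch=None, voice=None, gaze=None):
    """Combine partial commands from several input modalities
    into one action on the same object (illustrative only)."""
    if touch != "select":
        return shape              # no gesture, nothing selected
    if voice and voice.startswith("color "):
        shape.color = voice.split(" ", 1)[1]   # voice sets a property
    if gaze:
        shape.position = gaze     # gaze vector supplies the target location
    return shape

# Finger gesture selects, voice recolors, gaze places:
shape = fuse_inputs(Shape(), touch="select",
                    voice="color blue", gaze="lower-right")
print(shape.color, shape.position)  # blue lower-right
```

The point of the sketch is the synergy the patent describes: no single modality carries the whole command, but together they do.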
Patent applications, as a rule, are designed to cover as wide an array of possible applications for the technology as the author can think of, so don't expect to see a Mac or iPhone with all of that stuff just yet. Still, it does seem that Apple is putting a lot of thought into new forms of input these days.