SAN FRANCISCO--Rather than you teaching your gadgets what to do, Intel researchers say that in the not-too-distant future those gadgets will learn about you on their own: where you are, how you're feeling, and what you want.
It's actually not as creepy as it sounds. Intel Chief Technology Officer and Director of Intel Labs Justin Rattner took the stage Wednesday at the annual Intel Developer Forum here to talk about the future of "context-aware computing," what Intel is doing about it, and how gadgets can make life easier for their owners, but in a way that the owners can control.
Context-aware computing is Intel's term for devices that anticipate what people need or want and guide them accordingly. The context is gathered through a combination of "hard sensors"--cameras that detect movement and GPS-based location information--and "soft sensors"--such as calendar information or pieces of data you input into a device.
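One way to picture the hard/soft split is as two halves of a single context record. The sketch below is purely illustrative--the class and field names are invented, not Intel's actual design--but it shows how physical readings and user-supplied data might be fused into one object a device could reason over:

```python
from dataclasses import dataclass

# Hypothetical sketch of combining "hard" and "soft" sensor data
# into one context record. All names are illustrative, not Intel's API.

@dataclass
class HardSensors:
    latitude: float        # GPS fix
    longitude: float
    motion_detected: bool  # e.g. from a camera or accelerometer

@dataclass
class SoftSensors:
    next_appointment: str  # pulled from the user's calendar
    stated_interests: list # preferences the user typed in

@dataclass
class Context:
    hard: HardSensors
    soft: SoftSensors

    def summary(self) -> str:
        return (f"at ({self.hard.latitude:.4f}, {self.hard.longitude:.4f}), "
                f"moving={self.hard.motion_detected}, "
                f"next up: {self.soft.next_appointment}")

ctx = Context(
    HardSensors(40.7580, -73.9855, True),
    SoftSensors("Dinner 7pm", ["museums", "jazz"]),
)
print(ctx.summary())
```

A real platform would of course stream and update these fields continuously; the point here is only that "context" is the combination of both kinds of input, not either one alone.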
The most consumer-friendly example is something Intel Labs has been working on for a while called the Personal Vacation Assistant. It's a mobile device (still in the prototype phase) that looks a lot like an oversized GPS unit. It uses what Intel is calling "context" to help travelers decide what to do while playing tourist. Your personal travel preferences (where you like to stay, things you like to do), combined with data about what you've already done, your location, and your calendar schedule, help the device make on-the-spot recommendations for sights to see, places to eat, and more. At the end of a trip, the device can auto-generate a travel blog too, including photos and videos.
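The kind of filtering such an assistant might do can be sketched in a few lines: rank nearby attractions by the user's stated preferences and drop anything already visited. The data and function names below are hypothetical, standing in for whatever the real device does:

```python
# Illustrative sketch of preference- and history-aware recommendation.
# All attraction data and names here are made up for the example.

def recommend(attractions, preferences, visited, max_distance_km):
    candidates = [
        a for a in attractions
        if a["name"] not in visited and a["distance_km"] <= max_distance_km
    ]
    # Put attractions matching a stated preference first, nearest first.
    candidates.sort(key=lambda a: (a["category"] not in preferences,
                                   a["distance_km"]))
    return [a["name"] for a in candidates]

attractions = [
    {"name": "MoMA", "category": "museums", "distance_km": 1.2},
    {"name": "Central Park", "category": "parks", "distance_km": 0.8},
    {"name": "Village Vanguard", "category": "jazz", "distance_km": 3.0},
]
# Central Park is already visited; the jazz club is outside the radius.
print(recommend(attractions, {"museums", "jazz"}, {"Central Park"}, 2.0))
# → ['MoMA']
```

The hard-sensor side (your GPS position) supplies `distance_km`; the soft-sensor side (your preferences and trip history) supplies the rest, which is the combination the article describes.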
Intel and Fodor's have already conducted a test putting the device to work with a few dozen visitors to New York City.
Sensors built into devices are what will let them gather and process context, said Lama Nachman, an Intel Labs researcher who joined Rattner on stage.
"Sensing is at the core of these context-aware devices," she said.
As an example, she showed a prototype TV remote with a sensor pack that can detect who is holding it. The remote can tell who is watching TV based on the movements the holder makes. It uses a process Intel calls "unsupervised learning," which means the sensor on the remote is always on and always learning in the background. When it figures out who it is, the remote can then make personalized recommendations of shows the user may want to watch. Those recommendations appear in the form of a pop-up menu on the TV screen.
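"Unsupervised" here means the remote is never told whose hand it's in; it just groups similar motion patterns together until distinct users emerge. A toy one-dimensional k-means clustering, which is one generic way to do this (not necessarily Intel's algorithm, and with made-up feature values), looks like:

```python
# Toy illustration of unsupervised user separation: cluster unlabeled
# motion readings into two groups. Generic 1-D k-means, not Intel's
# actual method; the "grip shakiness" values are invented.

def kmeans_1d(samples, iters=20):
    # Start the two cluster centers at the extremes of the data.
    centers = [min(samples), max(samples)]
    for _ in range(iters):
        clusters = [[], []]
        for s in samples:
            nearest = 0 if abs(s - centers[0]) <= abs(s - centers[1]) else 1
            clusters[nearest].append(s)
        # Move each center to the mean of its assigned samples.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Hypothetical readings from two different people handling the remote.
samples = [0.10, 0.12, 0.09, 0.11, 0.80, 0.82, 0.79, 0.85]
centers = kmeans_1d(samples)
print(centers)  # one center per user's characteristic motion
```

No labels were ever provided, yet the two users end up with well-separated cluster centers; the always-on sensor pack in the prototype presumably refines such groupings continuously in the background.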
Besides coming up with practical applications, Intel is in the process of fusing all of this hard and soft sensor data into a platform that can be controlled by users. It's sort of a "digital rights management" for context-aware devices, said Rattner.
"We need a cognitive framework for managing context," he said. "So users can share what context is released, to whom, and when it expires."
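The "DRM for context" idea can be made concrete with a small policy table: the user grants a given recipient access to a given piece of context for a limited time, after which the grant expires. The class and names below are invented for illustration, not an Intel design:

```python
import time

# Sketch of a user-controlled context-release policy: which context
# fields go to which parties, and when each grant expires.
# All names here are hypothetical.

class ContextPolicy:
    def __init__(self):
        # (context_field, recipient) -> expiry timestamp
        self._grants = {}

    def grant(self, field, recipient, ttl_seconds):
        self._grants[(field, recipient)] = time.time() + ttl_seconds

    def allowed(self, field, recipient):
        expiry = self._grants.get((field, recipient))
        return expiry is not None and time.time() < expiry

policy = ContextPolicy()
# Share location with a (hypothetical) vacation assistant for one hour.
policy.grant("location", "vacation_assistant", ttl_seconds=3600)
print(policy.allowed("location", "vacation_assistant"))  # True
print(policy.allowed("calendar", "vacation_assistant"))  # False: never granted
```

This mirrors the three controls in Rattner's quote: what context is released (the field), to whom (the recipient), and when it expires (the TTL).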
He didn't make any bold predictions about how many devices we'll see like this, but he did say we "can expect to see these features appearing in Intel products in the not-too-distant future."