
Sensors will be key to future computing, Intel says

Computers will be capable of interpreting human behavior--once sensors intimately connect the physical and digital worlds--speakers say at Intel Developer Forum's "Day Zero."

Vivian Yeo Special to CNET News

SAN FRANCISCO--Intel is working on future technology capable of understanding human behavior and pointing people toward an appropriate course of action.

Mobile devices of tomorrow will be smaller yet equipped with more powerful computing capabilities, and will enjoy platform-wide power efficiency, Mary Smiley, Intel's director of emerging platforms, told the media here on Monday, or "Day Zero" of the Intel Developer Forum.

A key feature of such devices will be sensors that give them the ability to understand their users' world, as well as the "situational awareness" to provide inference and guidance.

"Tomorrow, (those devices) will have such a deep understanding of you, they will scream out what's important to you," Smiley said.

At the heart of connecting the physical and digital worlds are sensors, Andrew Chien, vice president of the corporate technology group and director of Intel Research, noted in his presentation on sensing technologies.

Chien highlighted several research projects, including an initiative known as "everyday sensing and perception," or ESP, which began in the fourth quarter of 2007.

The idea behind ESP is to make computers more aware of their users and context in everyday activities and environments, he said, adding that Intel and its academic research partners aim to achieve 90 percent accuracy across 90 percent of a typical person's daily life. Such technology involves a range of capabilities, from low-level sensing to high-level understanding that can interpret movement, emotions, and words.

One research application of ESP is inferring activities through visual object recognition, using "egocentric video" captured by a mobile camera worn on the shoulder. At present, the automatic system can recognize seven objects with between 75 percent and 90 percent accuracy; Intel hopes to scale this up to hundreds of objects and hours of video, said Chien.
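To make the idea concrete, the following is a minimal, hypothetical sketch of that kind of inference--mapping objects detected in egocentric video frames to a likely activity. The object vocabulary and activity mappings are invented for illustration; Intel has not published the details of its system.

```python
from collections import Counter

# Hypothetical mapping from everyday activities to objects that tend to
# appear in egocentric video while performing them. (Invented labels;
# the seven-object vocabulary in Intel's prototype is not public.)
ACTIVITY_OBJECTS = {
    "making coffee": {"mug", "kettle", "coffee jar"},
    "brushing teeth": {"toothbrush", "toothpaste", "sink"},
    "reading": {"book", "lamp"},
}

def infer_activity(detected_objects):
    """Score each activity by how often its characteristic objects were
    detected across the frames, and return the best-scoring match."""
    counts = Counter(detected_objects)
    def score(objects):
        return sum(counts[obj] for obj in objects)
    best = max(ACTIVITY_OBJECTS, key=lambda a: score(ACTIVITY_OBJECTS[a]))
    return best if score(ACTIVITY_OBJECTS[best]) > 0 else None

# Objects recognized (imperfectly) across a few frames of shoulder video:
frames = ["mug", "kettle", "mug", "book"]
print(infer_activity(frames))  # -> "making coffee"
```

A real system would sit a noisy object detector in front of a probabilistic model rather than simple counting, but the pipeline shape--per-frame recognition feeding an activity-level inference--is the same.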

One challenge, however, is power efficiency, Chien pointed out. Real-time video event detection currently requires about 4 teraflops of computing power and consumes 10 kilowatts. In the future, the Santa Clara, Calif.-based company hopes to lower power consumption to less than 1 watt on a handheld, he said.
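Those figures imply a gap of roughly four orders of magnitude. A quick back-of-the-envelope check, sketched in Python using only the numbers cited above:

```python
# Efficiency gap implied by the figures Chien cited.
current_flops = 4e12      # ~4 teraflops for real-time video event detection
current_power = 10_000.0  # ~10 kilowatts today
target_power = 1.0        # <1 watt on a handheld

current_efficiency = current_flops / current_power  # flops per watt today
target_efficiency = current_flops / target_power    # same workload at 1 W

print(f"today:  {current_efficiency / 1e9:.1f} GFLOPS/W")           # 0.4 GFLOPS/W
print(f"target: {target_efficiency / 1e12:.1f} TFLOPS/W")           # 4.0 TFLOPS/W
print(f"required gain: {current_power / target_power:,.0f}x")       # 10,000x
```

In other words, running the same workload on a handheld would require about a 10,000-fold improvement in performance per watt.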

Beyond connecting the physical and digital worlds, Intel's connected visual computing vision also aims to bridge the two with a third--the virtual world--using visually rich interfaces.

Jim Held, Intel Fellow and director of tera-scale computing research, noted that there are over 2,000 virtual worlds today, and many are merging with popular social networks. Augmented reality--combining real-world information with data overlays--is also evolving, he added, with mobile augmented reality becoming more "compelling."

However, connected visual computing demands more from servers, clients, and networks, said Held. To that end, Intel's research in this area will cover four broad areas: platform optimization, distributed computation, visual content, and mobile experience.

In the area of visual content, for example, researchers are working on parameterized content, which offers manipulation tools and dynamically adjustable parameters that let people create customized faces with more precisely controlled expressions.
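As a rough illustration of what "dynamically adjustable parameters" could mean in practice, here is a generic blend-shape-style sketch (invented for illustration, not Intel's research code): a face is a weighted mix of a neutral shape and a few expression offsets, and users adjust the weights.

```python
# Generic parameterized-face sketch. Vertex data is reduced to a short
# list of numbers as a stand-in for a real 3D mesh.
NEUTRAL = [0.0, 0.0, 0.0]
EXPRESSIONS = {
    "smile": [0.3, 0.1, 0.0],
    "frown": [-0.2, 0.0, 0.1],
}

def blend_face(params):
    """Combine expression offsets according to user-set weights in [0, 1]."""
    face = list(NEUTRAL)
    for name, weight in params.items():
        offset = EXPRESSIONS[name]
        face = [v + weight * o for v, o in zip(face, offset)]
    return face

# A user dials in 80 percent smile with a trace of frown:
print(blend_face({"smile": 0.8, "frown": 0.1}))
```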

Vivian Yeo of ZDNet Asia reported from San Francisco.
