Microsoft aims to get more touchy-feely

At a conference this week, the software giant plans to detail several new research efforts in the field of user interfaces. We've got the early scoop.

Ina Fried, CNET News

Bill Gates may not be hanging around Microsoft's research labs 24/7, but his vision for going beyond the mouse and keyboard seems to be doing pretty well without his day-to-day oversight.

At a user interface conference this week, the software maker plans to present several research papers, including a number designed to take the multitouch interface used in Microsoft's Surface and expand it into new arenas.

Although Microsoft's tabletop computer is still in the midst of its earliest commercial deployments, the company is already hard at work trying to figure out where the technology can go next.

Andy Wilson, one of the Microsoft researchers who helped create the Surface, is among those presenting at the User Interface Software and Technology conference, which is being held in Monterey, Calif. He is set to talk about how the same kind of physics engines used in 3D games could help make surface computing much more realistic.

Although multitouch computing is a huge leap toward making on-screen objects feel tangible, the illusion breaks down because every touch is treated the same, unlike in the real world, where we can touch lightly, push, or grab an object.


While a child using Surface for the first time will tend to use his or her whole hand to interact with objects, adults learn to use just a fingertip, because they quickly realize that the computer essentially recognizes only a single point for each "touch."

"The problem with that is you are flushing away a lot of the subtlety," Wilson said.

But if the physics engines were better, Wilson says, objects could be folded and twisted and even torn like a piece of paper.

"How can we enhance the interaction model so we don't fall into this trap of thinking of every contact as a discrete point?" Wilson said. In his paper, he suggests a few different interactions, showing how a user can grasp a solid object and interact with it (say rolling a ball), or fold or tear an on-screen piece of cloth.

Another team of researchers from Microsoft's Cambridge, England, lab is showing a technique called SecondLight that allows a surface computer to project two images at once: one on the computer's surface and the other onto objects held in the air above it.


This one's a little harder to explain. Essentially, the computer's surface rapidly alternates between a transparent state and a diffuse state that catches an image. The projector is synchronized with this alternating pattern, sending one image when the surface is transparent and a second when it is not. The first image passes through and lands above the device, while the second appears on the surface itself. Because the images alternate faster than the eye can detect, both appear constant.
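A rough sketch of that timing loop follows (the driver calls here are hypothetical stand-ins; the real system uses an electronically switchable diffuser and custom projection hardware):

```python
# Hypothetical sketch of SecondLight-style time multiplexing.
# set_diffuser() and project() are stand-ins, not a real API.

import itertools
import time

def set_diffuser(state):
    """Stand-in driver call: 'diffuse' catches light, 'clear' passes it."""
    pass

def project(frame):
    """Stand-in driver call: push one frame to the projector."""
    pass

def run(surface_frames, through_frames, hz=120):
    """Interleave two image streams faster than the eye can follow.

    When the diffuser is 'diffuse', the projected frame lands on the
    tabletop; when it is 'clear', the light passes through and lands on
    whatever is held above the surface. At 120 Hz, a viewer perceives
    both images as steady, simultaneous displays. Runs until interrupted.
    """
    period = 1.0 / hz
    for surface, through in zip(itertools.cycle(surface_frames),
                                itertools.cycle(through_frames)):
        set_diffuser('diffuse')
        project(surface)         # image appears on the table surface
        time.sleep(period / 2)
        set_diffuser('clear')
        project(through)         # image lands on paper/plastic held above
        time.sleep(period / 2)
```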

Real-world applications
Gaming is among the potential applications: clear plastic pieces could sit on top of the display and become chess pieces, checkers, or other game tokens as needed. Medical imaging could be another interesting use, where doctors could look at an entire X-ray on the main display and hold up a piece of paper to see a second image, perhaps a close-up or an earlier X-ray.

"We're actually bringing the display into the real world," said Steve Hodges, one of the researchers behind SecondLight.

Such a move also helps break one of the inherent limitations of current surface computing. "It's still bound to the surface," Hodges said. "You are interacting on the surface."

One of the nice things about the SecondLight approach is that although the technology is complex, the objects that interact with the computer can themselves be simple. "All the peripherals are very cheap, either bits of plastic or pieces of paper," said Shahram Izadi, another researcher on the project.

"Across Microsoft Research, in different parts of the world, there's a strong theme of finding new ways of interacting. These projects all relate and overlap at the edges."
--Steve Hodges, SecondLight researcher

Microsoft is also presenting a round surface computer prototype known as Sphere, which CNET readers got a look at back in July.

Another touch research project aims to record gestures without using the screen as the input surface. Microsoft already explored one notion, dubbed LucidTouch, in which users control a screen by moving their fingers across the back of the device. Microsoft tries a different approach in its latest project, dubbed SideSight. Here, the device sits flat on a table while infrared sensors along its edges record gestures made on either side of the display.
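As a rough illustration (hypothetical values and names; the published SideSight prototype's specifics differ), edge sensing might reduce to mapping each infrared sensor's reflectance reading to a virtual touch point on the table beside the screen:

```python
# Hypothetical sketch of SideSight-style sensing: a strip of infrared
# proximity sensors runs along each edge of the device. The index of a
# sensor that sees a finger gives its position along the edge, and the
# reflected intensity gives a rough distance from the edge.

def edge_touches(readings, sensor_pitch_mm=10.0, threshold=0.2):
    """Map one edge's readings to (along_edge_mm, distance_mm) points.

    readings: reflectance values in [0, 1], one per sensor; a stronger
    reflection means the finger is closer to the edge. The ~50 mm
    sensing range below is an assumed figure for illustration.
    """
    points = []
    for i, r in enumerate(readings):
        if r > threshold:
            along = i * sensor_pitch_mm
            distance = (1.0 - r) * 50.0
            points.append((along, distance))
    return points

# Usage: a finger near sensor 3, about 15 mm out from the device's edge.
readings = [0.0, 0.05, 0.1, 0.7, 0.1, 0.0, 0.0, 0.0]
print(edge_touches(readings))   # [(30.0, 15.0)]
```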

Alternatives like these are important for two reasons. First, very small devices often lack the screen real estate for a touch screen. Second, by its nature, touching a screen blocks the very thing being pointed at, hampering precision. Both LucidTouch and SideSight are aimed at, quite literally, getting around these issues.

"Across Microsoft Research, in different parts of the world, there's a strong theme of finding new ways of interacting," Hodges said. "These projects all relate and overlap at the edges."