A higher-precision image sensor approach lets your phone detect fine movements like finger-pinch gestures made away from the screen.
BARCELONA -- With chip-based camera technology from Rambus, your next smartphone may understand a new range of commands issued by moving your hands and wiggling your fingers in front of the screen instead of by touching it.
With the technology, which sandwiches a thin layer of finely etched plastic or glass on top of a tiny image sensor, the chip-tech company believes it can dramatically improve gesture recognition. The technology could also help self-driving cars recognize oncoming traffic at night, improve virtual reality headsets and power eye-tracking equipment that monitors exactly where a person is looking, said Patrick Gill, principal research scientist at Sunnyvale, Calif.-based Rambus.
Gesture recognition lets people interact with devices by waving hands or arms around, but today it's generally good only for coarse, sweeping motions that can do things like flipping pages in an e-book or changing channels. Rambus' approach, which it showed off this week at the Mobile World Congress show here, is designed to recognize more detailed gestures like a finger-pinch to zoom in on an image. Unlike with touchscreens, though, the gestures are made in the air in front of the screen.
The technology also cuts power consumption, making it practical for battery-powered devices that need computer vision abilities. For example, a bus stop could recognize when people have arrived and tell the transit system it's time to send a bus.
The potential for such progress illustrates that the shift from film cameras to digital imaging is still in its early years. Rambus' technology isn't good for conventional digital cameras like smartphones or SLRs, but it could help spread some useful vision skills to super-small computing devices. The ubiquity of smartphone cameras has transformed our society, but that could be only the beginning when you imagine tiny computer eyes spreading to everything from driver-watching dashboards to floor-cleaning robots.
"Our aim is to be able to add eyes to any digital device, no matter how small," Gill said.
The technology should arrive in consumer products in 18 months to 24 months, said Kendra Da Berti, director of solutions marketing at Rambus.
Rambus isn't the only company working on cameras that are far removed from the days of film. Intel's RealSense 3D cameras are built into the Dell Venue 8 7000, giving the tablet an ability to paint a 3D picture of what's in front of it. Mountain View, Calif.-based startup Lytro, while not commercially successful, has developed light-field camera technology that lets a photographer adjust image aspects like focus after the shot is taken. These designs rely on a technology called computational photography, which means computer processing is an essential step in generating an image.
In Rambus' case, that step is called deconvolution. The transparent plastic lens layer in front of the image sensor has been etched with very thin lines into what's called a diffraction grating. These lines change how light travels through the lens and onto the sensor.
The result isn't a bitmap image of reality, like a conventional camera with a curved lens generates, but instead a bloblike pattern. But with the deconvolution process -- baked into Rambus chip hardware for fast execution -- the original scene can be reconstructed rapidly.
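Rambus hasn't published the details of its deconvolution algorithm, but the general idea can be sketched with a standard frequency-domain technique. The toy example below uses Wiener deconvolution in Python with NumPy, with a Gaussian blur standing in for the grating's actual point-spread function (the PSF, regularization constant, and scene are all illustrative assumptions, not Rambus specifics):

```python
import numpy as np

def wiener_deconvolve(measurement, psf, noise_power=1e-3):
    """Recover a scene from a sensor reading blurred by a known
    point-spread function (PSF), via frequency-domain Wiener filtering."""
    H = np.fft.fft2(psf, s=measurement.shape)   # PSF transfer function
    G = np.fft.fft2(measurement)                # blurred measurement
    # Wiener filter: invert H, regularized so noise isn't amplified
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))

# Toy demo: blur a synthetic "scene" with a known PSF, then reconstruct it.
scene = np.zeros((64, 64))
scene[20:40, 20:40] = 1.0                       # a bright square

# Gaussian PSF centered at index (0, 0) with wraparound, so the
# circular convolution below doesn't shift the image.
d = np.minimum(np.arange(64), 64 - np.arange(64))
psf = np.exp(-(d[:, None] ** 2 + d[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(blurred, psf)
```

The recovered image is substantially closer to the original scene than the blurred measurement is; in a real lensless camera the measurement would be the sensor's bloblike pattern and the PSF would come from calibrating the diffraction grating.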
At 0.055mm across, the optical sensor itself is much smaller than the devices used today for gesture recognition, but packaged on its chip it's in the same ballpark as the 1.5mm ball lenses common in gesture-recognition cameras. Still, it's able to capture more detail -- not just the coarse up/down/left/right information of today's gesture recognition, but a 200x200-pixel array with enough detail to count the fingers of a hand, Gill said. Moving to fewer pixels would cut power consumption but still work for many applications, he added.
Conventional gesture-recognition cameras use the ball lens to get the wide field of view necessary to track waving arms, but Rambus' chip has a 140-degree field of view that's up to the challenge, Gill said.
The diffraction gratings can be manufactured separately with conventional chipmaking equipment. Many of them can be packed onto the same sort of silicon crystal wafer that's used to make processors, Gill said. A wafer with the diffraction gratings could then be sandwiched onto a corresponding wafer of image sensor chips out of which the mini-cameras would be cut.
The camera chip consumes very little power -- less than a milliwatt. That means a battery-powered device -- a sensor or a security camera, for example -- could rely on the chip to alert it to a visually detectable change.
Wearable computing devices are one possible market, said J. James Tringall, Rambus' "master envisioneer."
Today's smartwatches don't have much battery life. But using Rambus chip technology, a smartwatch could stay switched off most of the time. If the power-sipping optical system detected somebody looking at it, it could then switch on the rest of the system.
"Instead making you touch the screen or wiggle your wrist just right, it could detect whether eyes are looking at it," Tringall said. "That enables aggressive power management."