Microsoft's Kinect: A robot's low-cost, secret weapon
Robotics engineers are using the Kinect motion-sensing controller, which lets people play video games hands-free, as cheap sensors to help robots "see" their surroundings and operate autonomously.
CAMBRIDGE, Mass.--As robots seek to mimic humans' ability to see and hear, they have a secret weapon in Microsoft's Kinect motion-sensing game controller.
MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), which I toured Friday, is piled high with all kinds of hardware, including laptops, unmanned submarines, and mechanical limbs. But when it comes to equipping robots with artificial eyes and ears, robotics hackers are clearly enamored with the Kinect motion-sensing controller and sensors like it.
A Kinect is attached to the head of the humanoid PR2 robot as it tries to bake cookies. It's also mounted on a robotic wheelchair, as well as on unmanned vehicles for exploring the ocean and the air. For robot builders, Kinect's depth camera provides a relatively cheap set of eyes--crucial to giving robots more autonomy--that plugs in nicely to onboard computers.
"Kinect costs $150 and replaces $7,000 in sensors," said mechanical engineering student Mario Bollini. And plugging the controller into a robot--Bollini is working with Willow Garage's PR2 robot--and writing software for it is straightforward, he said.
In another effort, a Kinect is attached to a wheelchair to improve automated navigation. Researchers are writing algorithms that would let a person teach the wheelchair the ins and outs of, say, a nursing home by having it follow someone around or respond to voice commands.
Kinect's depth camera can also be used to navigate environments where robots can't take advantage of GPS. The Robust Robotics Group at MIT and a team at the University of Washington have equipped a quadrotor--a four-propeller helicopter--with a Kinect to create a three-dimensional map of a location, such as a building after an earthquake.
As the system flies around, the Kinect sensor sends out an infrared beam and, based on the reflections, can start to build, point by point, a colored map of an indoor or outdoor space in software. The cameras also allow the quadrotor to avoid colliding with other objects.
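The point-by-point map the article describes is built by back-projecting each pixel of the depth image into 3-D space. The sketch below shows the general idea using the standard pinhole camera model; the focal lengths and principal point here are assumed placeholder values, not the actual calibration of any Kinect unit, and the function name is hypothetical.

```python
import numpy as np

# Assumed (placeholder) intrinsics for a Kinect-style depth camera.
FX, FY = 525.0, 525.0   # focal lengths, in pixels
CX, CY = 319.5, 239.5   # principal point (roughly the image center)

def depth_to_points(depth_m):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy example: a tiny 3x4 "sensor" viewing a flat wall 2 meters away.
demo_depth = np.full((3, 4), 2.0)
cloud = depth_to_points(demo_depth)
print(cloud.shape)  # (12, 3): one 3-D point per valid pixel
```

In a real mapping system, clouds like this from successive frames would be registered against one another (for example with SLAM techniques) to grow the map as the quadrotor flies.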
All that sensor data requires some hefty onboard processing. The Robust Robotics Group's machine, which is about as wide as a pizza box, has two computers, including one that's about as powerful as a laptop processor, according to a researcher.
Giving robots a better way to understand their environment with off-the-shelf products is helping lead to more capable robots. Researchers called the sensor in the Kinect controller "incredibly disruptive" because it delivers those capabilities at a consumer electronics price.
For its part, Microsoft is trying to attract more developers to use Kinect for robotic applications and is upgrading the hardware so that it can better "see" very close objects rather than have to rely on a separate sensor.
"Microsoft will continue researching even better Kinect hardware. This means that 3D depth data is now here to stay, so sharpen up your 3D geometry skills and get cracking on applications that take full advantage of these new devices," said Trevor Taylor, program manager for Microsoft Robotics, in a recent blog post.