
Robo Brain is learning from the internet

Cornell University is creating a repository of robot knowledge, built from YouTube videos, images and manuals found on the internet, to be shared among robots.

Michelle Starr, Science Editor

Image: Robo Brain (Cornell University)

As of July, the internet contained some 3.32 billion indexed pages. It's a pretty massive place and, while much of it is taken up with pictures of kittens and other less elucidating material, you can also hit up the net any time you want to learn a new skill: swing dancing, for instance, or how to open a champagne bottle with a sword.

And, as it turns out, it may also be a good way to teach a robot tasks and recognition. Last month, Cornell University turned on its Robo Brain project, described as "a large-scale computational system that learns from publicly available internet resources, computer simulations, and real-life robot trials".

Even as you read these words, the system is downloading one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, as well as previous training Cornell researchers gave to other robots in their laboratories. By studying these materials, Robo Brain will learn how to recognise objects and how they are used, as well as human language and behaviour -- and it will be able to pass this knowledge on to other robots.

"Our laptops and cell phones have access to all the information we want," explained computer science assistant professor Ashutosh Saxena. "If a robot encounters a situation it hasn't seen before it can query Robo Brain in the cloud."

If a robot sees a mug, for instance, it can learn from Robo Brain to recognise it as a coffee mug: an object used to hold liquids, carried by its handle, kept upright when full so as to avoid spillage, but free to be tipped when empty, such as when it is being carried to and from the dishwasher.
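To give a rough idea of what that kind of cloud lookup might involve, here is a minimal Python sketch. The class names, fields and query method are all invented for illustration; they are assumptions, not Robo Brain's actual interface.

# A hypothetical stand-in for a cloud knowledge base a robot could query
# about an object it has just recognised. Nothing here is Robo Brain's real code.
from dataclasses import dataclass, field

@dataclass
class ObjectKnowledge:
    """What the (assumed) knowledge base stores about one object category."""
    name: str
    holds: list = field(default_factory=list)        # what the object can contain
    grasp_points: list = field(default_factory=list) # where it can be grabbed
    constraints: list = field(default_factory=list)  # rules for handling it

class KnowledgeBase:
    """Stand-in for a cloud-hosted store the robot would query over the network."""
    def __init__(self):
        self._objects = {
            "coffee mug": ObjectKnowledge(
                name="coffee mug",
                holds=["liquids"],
                grasp_points=["handle"],
                constraints=[
                    "keep upright when full to avoid spillage",
                    "may be tipped when empty (e.g. loading the dishwasher)",
                ],
            )
        }

    def query(self, label):
        """Return what is known about a recognised object, if anything."""
        return self._objects.get(label)

kb = KnowledgeBase()
mug = kb.query("coffee mug")
if mug:
    print("Carry the", mug.name, "by:", ", ".join(mug.grasp_points))
    for rule in mug.constraints:
        print("Rule:", rule)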

Robo Brain can also store layers of abstraction, using a system the researchers call "structured deep learning". For example, if the robot sees an armchair, it knows that it is classed as furniture and, more specifically, that it is furniture used for sitting -- a sub-class that contains a wide range of chairs, stools, benches and couches.
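A toy version of that kind of hierarchy might look like the following Python fragment; the specific parent links are an assumption made purely to illustrate how one concept can sit inside a more general one.

# Hypothetical parent links: a specific object points up to a more general class.
PARENT = {
    "armchair": "furniture for sitting",
    "stool": "furniture for sitting",
    "bench": "furniture for sitting",
    "couch": "furniture for sitting",
    "furniture for sitting": "furniture",
}

def ancestors(concept):
    """Walk up the hierarchy from a specific object to its most general class."""
    chain = []
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

print(ancestors("armchair"))  # ['furniture for sitting', 'furniture']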

This information will then be stored in what mathematicians call a Markov model, represented as a series of points ("nodes") connected by lines ("edges"), like a giant branching graph in which each state depends only on the state that came immediately before it.

The nodes could be actions, objects or parts of an image, and each one is assigned a probability: in effect, a measure of how much an instance can vary and still count as a correct match. A key, for example, can vary in form, but still usually consists of a handle, a shaft and teeth. The robot can then follow a chain through the graph, looking for nodes that match within those probability limits.
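Here is one way to picture that in code. This Python fragment is only a simplified illustration under assumed node thresholds and made-up detector scores; it is not drawn from the project itself.

# A tiny graph of nodes and edges for the "key" example above. Each node carries
# a minimum probability that an observation must reach to count as a match.
graph = {
    "nodes": {
        "key":    0.6,
        "handle": 0.5,
        "shaft":  0.5,
        "teeth":  0.5,
    },
    "edges": [
        ("key", "handle"),
        ("key", "shaft"),
        ("key", "teeth"),
    ],
}

def matches(node, observed_score):
    """A node matches if the detector's score clears that node's threshold."""
    return observed_score >= graph["nodes"][node]

def follow_chain(start, scores):
    """Follow edges out of a matched node, keeping neighbours that also match."""
    if not matches(start, scores.get(start, 0.0)):
        return []
    return [b for a, b in graph["edges"]
            if a == start and matches(b, scores.get(b, 0.0))]

# Detector scores for parts seen in an image (made-up numbers):
scores = {"key": 0.8, "handle": 0.7, "shaft": 0.4, "teeth": 0.65}
print(follow_chain("key", scores))  # ['handle', 'teeth']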

The project is currently available to view on the official Robo Brain website, where users can help by upvoting correct actions and objects, and leaving comments for the researchers.