
MIT discovery may improve robotic eyes

How the brain determines texture could improve artificial intelligence visual recognition systems.

Candace Lombardi
How does your brain visually tell the difference between spilled milk and spilled salt?

It may be calculating the patterns of light and dark spots, according to researchers from the Massachusetts Institute of Technology and the NTT Communications Science Labs in Japan.

Exactly how the human brain represents visual information is not known, but some believe it operates like a digital camera paired with a highly sophisticated computer, said Lavanya Sharan, a member of the perceptual science group in brain and cognitive sciences at MIT.

The electrical engineering and computer science graduate student co-authored the paper "Image statistics and the perception of surface qualities," which will appear in the April 18 issue of Nature.

The paper argues that the brain takes a digital snapshot and then analyzes the bright and dark spots to determine texture and, subsequently, what type of material it's looking at, in addition to taking in information on color and shape.

"Practical applications of this work would extend to domestic robots or autonomous vehicles that could understand the world they look at. But it's also important for understanding how human perception works. How the brain understands the color or the shininess of a surface can shed light on the workings of the visual system, which is a large open question," Sharan said.

"Let's say I am looking at a shiny, black material; because it's shiny and black it will have strong highlights. That highlight will be extremely strong and be a bright region in that image. The brain then measures. If you have more highlights than normal, then it assumes that the surface is black or shiny or both," she said.

The idea that brightness or whiteness represents shine is something artists have long used to illustrate texture in paintings. The classic shiny apple is painted with a white crescent on the part that is supposed to be exhibiting shine.

In life, that part of the apple is not actually white; it's red. But the brain takes notice of this shine as brightness, and uses that information to figure out that the object it sees is shiny.

To analyze this process at a more sophisticated level useful to artificial intelligence, the MIT group plotted the process on what Sharan called a "luminosity histogram."

The x-axis measured the different intensities of light seen by the brain; the y-axis plotted the number of points sharing a common intensity value. Think of it as counting the number of white, gray or black pixels in a single black-and-white digital image, only with many more intensity levels recorded.
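The histogram described above can be sketched in a few lines of code. This is a minimal illustration, not the researchers' implementation: the function name and the tiny hand-made "image" are invented for the example, and a real system would work on camera images with a numerical library.

```python
# Minimal sketch of a "luminosity histogram":
# x-axis = pixel intensity (0-255), y-axis = number of pixels
# sharing that intensity. The image here is a toy 2x3 grid of
# 8-bit grayscale values, invented purely for illustration.

from collections import Counter

def luminosity_histogram(image):
    """Count how many pixels share each intensity value (0-255)."""
    counts = Counter(value for row in image for value in row)
    return [counts.get(i, 0) for i in range(256)]

# Mostly mid-gray pixels plus one bright "highlight" pixel.
image = [
    [128, 128, 250],
    [128, 130, 128],
]
hist = luminosity_histogram(image)
print(hist[128])  # 4 pixels at intensity 128
print(hist[250])  # 1 bright highlight pixel
```

The rare but very bright pixel at intensity 250 is exactly the kind of "strong highlight" Sharan describes for shiny, dark surfaces.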

From the histograms, the team determined that the brain relates each brightness level to how often it occurs in the image in order to judge whether a surface is shiny, rough or wet.
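One simple statistic consistent with this description is the skewness of the luminance histogram: a distribution with a long tail of rare, very bright pixels (highlights) is positively skewed, hinting at a glossy surface. The article does not name the exact statistic the researchers used, so treat this as an assumed illustration.

```python
# Hedged sketch: histogram skewness as a gloss cue. A positively
# skewed intensity distribution (a few extreme highlight pixels)
# suggests glossiness; the specific statistic is an assumption here.

def skewness(pixels):
    """Third standardized moment of a list of intensity values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    if var == 0:
        return 0.0
    return sum((p - mean) ** 3 for p in pixels) / (n * var ** 1.5)

matte = [100, 110, 120, 130, 140]   # symmetric spread of intensities
glossy = [40, 45, 50, 55, 255]      # dark pixels plus a rare highlight

print(skewness(matte))              # 0.0 (symmetric histogram)
print(skewness(glossy) > 0)        # True: the highlight skews it right
```

The matte sample's symmetric intensities cancel out, while the glossy sample's lone bright pixel pulls the statistic positive, matching the intuition that "more highlights than normal" signals a dark or shiny surface.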

"We see this work as a stepping stone or the beginning for material perception. People who work in visual perception have so far concentrated on object recognition. But we want to stress that it is not only important to recognize the table, but also what material the table is made of," Sharan said.