Does schmoozing make robots clever?

Yes, says a Sony scientist, who argues that teaching robots to interact socially will allow them to develop their "minds"--just as people do. It could leave Aibo by the wayside.

Matthew Broersma Special to CNET News
A Belgian professor doing research for Sony wants to teach robots to be more like people--but he's running into some resistance.

Luc Steels, a professor at the University of Brussels and director of Sony's Computer Science Laboratories in Paris, believes that the breakthrough that will take robots beyond the Aibo stage will come from allowing them to express themselves--through interaction and through forming their own languages and even "cultures"--rather than from focusing strictly on how individual machines behave.

If Steels' theories gain the upper hand, it would mean a new direction for the robotics field. Today, development tends to focus on machines that exhibit increasingly complex behaviors and can learn new ones. In contrast, Steels sees robots progressing by learning to form concepts they can swap with other robots, thereby developing their own "minds," just as humans do.

"We're not going to get very far with robots if we don't focus on social interactions," Steels said. His ideas are outlined in a recent paper, "Evolving and Sharing Representations through Situated Language Games," presented on Thursday at the conference "Biologically Inspired Robots" in Bristol, England.

Steels' most ambitious projects to date have involved thousands of software agents (bits of code) transporting themselves across the Internet to control robots in different cities around the world. The agents were "taught" to associate certain words with objects seen by the robots and then to use these words to interact with other agents. The spread of words and meanings among the agents follows patterns similar to those found in human culture.
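The core mechanic of such a word game can be sketched in a few lines of code. The Python fragment below is only an illustration of the general idea, not Steels' actual system; the object names, word alphabet, and population size are all invented for the example. Two agents jointly attend to an object, the speaker names it (inventing a word if it has none), and on a mismatch the hearer adopts the speaker's word.

    import random

    # Illustrative "naming game" sketch: agents converge on a shared
    # vocabulary purely through repeated pairwise interactions.
    OBJECTS = ["red-ball", "blue-box", "green-cone"]

    class Agent:
        def __init__(self):
            self.lexicon = {}  # each agent's private mapping: object -> word

        def word_for(self, obj):
            # Invent a random word the first time this agent must name obj.
            if obj not in self.lexicon:
                self.lexicon[obj] = "".join(random.choices("aeioubdgkl", k=4))
            return self.lexicon[obj]

    def play_game(speaker, hearer):
        obj = random.choice(OBJECTS)        # object both agents attend to
        word = speaker.word_for(obj)
        if hearer.lexicon.get(obj) == word:
            return True                     # success: the convention is shared
        hearer.lexicon[obj] = word          # failure: hearer adopts the word
        return False

    agents = [Agent() for _ in range(20)]
    for _ in range(5000):
        speaker, hearer = random.sample(agents, 2)
        play_game(speaker, hearer)

    # After enough games the population typically agrees on one word per object.
    print(agents[0].lexicon)

Repeated over thousands of random pairings, the population settles on one word per object, mirroring the spread of conventions observed in the experiments. Steels' real language games add perception, pointing and feedback between physical robots; the sketch captures only the consensus dynamic.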

Steels also works with Sony robots such as Aibo and an upcoming bipedal device, adding custom software to enable them to play word games.

An alternative to Turing
Steels' work deals with machine intelligence, but his is a fundamentally different view from the one embodied in the famous "Turing test."

According to the test, a human-like intelligence has successfully been created when a human can't tell the difference between a conversation with the artificial intelligence and one with another person.

"I think the Turing test is a bad idea because it's completely fake," Steels said. "It's like saying you want to make a flying machine, so you produce something that is indistinguishable from a bird. On the other hand, an airplane achieves flight but it doesn't need to flap its wings."

Similarly, Steels believes that machines can evolve intelligence through interaction with one another and with their ecology--but this synthetic intelligence is unlikely to bear much superficial resemblance to human intelligence.

In one sense, Steels jokes, the Turing test has already been passed--by the Aibo. He demonstrated with a video clip in which an Aibo approached a dog eating a piece of meat and was treated just like another dog: it was attacked.

He noted that while entertainment robots can interact with humans--and particularly children--through the use of emotional signals, they don't have their own interior lives. "They are like actors that express emotions but don't have the emotion themselves," he said.

However, Aibo-type machines can still be seen as the direct descendants of the wheeled "tortoises" developed by W. Grey Walter in the 1940s and 1950s. Steels built such robots using digital technology and Lego sets in the early 1990s, but in search of the next step turned to the linguistic concept of "representations." For example, a street can be blocked off physically with a roadblock, but a "no entry" sign is a representation that carries the same weight. Representations are closely tied not only to social interaction but also to the functions of the brain.

Robotic resistance
This notion has met with resistance on both theoretical and practical levels. Some scientists, such as Rodney Brooks of MIT, have argued that intelligent behavior doesn't need internal representations. And at this week's conference, other attendees expressed disbelief, saying that the limitations of today's cameras and vision software make it impractical for such robots to carry out any real interaction with the world.

Steels believes that technology is no constraint. "We don't need the full complexity of human vision; this can be built on any kind of sensory foundation," he said.

As for the theoretical argument, he believes that sooner or later the field will have to stop modeling robots on an unrealistically limited view of humanity.

"There is a danger in the field of viewing humans as machines, as automata, the way biology looks at humans as complex machines," he said. "Representation-making gives a rich view of people that is not covered by these behaviorist theories."

ZDNet U.K.'s Matthew Broersma reported from London.