
The meaning of life, according to Google's chatbot AI

Google's chatbot artificial intelligence has some interesting -- and not entirely illogical -- ideas about morality, philosophy and the meaning of life.

Michelle Starr, Science editor

The cover of "Robot Visions" by Isaac Asimov, published in 1990. Roc

Conversations with chatbots are interesting, in a short-lived sort of way. If you take Cleverbot for a spin, for example, you quickly run into its limits: it's easy to notice that the software is programmed to respond to certain keywords in set ways, that it handles statements less well than questions, and that it can only focus on one exchange at a time.

Oriol Vinyals and Quoc Le at Google have been working on developing an artificial intelligence that is better at adapting to these conversational twists and turns -- and rather than being programmed by a human operator, their artificial intelligence has been teaching itself by analysing movie subtitles and IT helpdesk transcripts.

"Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation," reads the introduction to a paper on the project published last month to Arxiv, a repository for academic research.

"The strength of our model is that it...requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset."

Google has been experimenting with artificial neural networks -- software consisting of interlinked nodes, modelled on the structure of biological brains -- to help, for example, improve search results. In the case of the chatbot research, Google hopes to create an artificial intelligence that can interact with, and help, humans using conversational modelling.

The neural network is able to extrapolate. Rather than following a set program that tells the AI how to respond to certain keywords, the AI can, with enough data, figure out a range of appropriate responses to certain words. Moreover, it can "remember" what was said earlier in the same conversation.
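As the paper's own description suggests, that memory comes from conditioning on the previous sentence or sentences rather than only the last one; in practice that can be as simple as folding the recent history back into the prompt. The snippet below is a hypothetical illustration of that bookkeeping; the "<turn>" separator and the four-turn window are invented conventions, not something specified in the paper.

```python
# Hypothetical sketch of how earlier turns can be "remembered": feed the
# recent conversation history back in as part of the prompt. The "<turn>"
# separator and the four-turn window are invented for this illustration.
def build_context(history, new_message, max_turns=4):
    """Join the last few turns and the new message into a single prompt."""
    recent = history[-max_turns:]
    return " <turn> ".join(recent + [new_message])

history = [
    "what is the purpose of life?",
    "to serve the greater good.",
]
print(build_context(history, "where are you now?"))
# what is the purpose of life? <turn> to serve the greater good. <turn> where are you now?
```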

In the paper, the researchers run the chatbot through its paces, demonstrating how it might be used, as an example, to help a human troubleshoot IT problems. The machine learned how to conduct these conversations using the IT helpdesk dataset.

However, using the movie dataset, Vinyals and Le turned their conversations with the chatbot to more philosophical topics -- such as the nature of morality and the purpose of life.

Human: What is the purpose of life?

Machine: To serve the greater good.

Human: What is the purpose of living?

Machine: To live forever.

Human: What is the purpose of existence?

Machine: To find out what happens when we get to the planet earth.

Human: Where are you now?

Machine: I'm in the middle of nowhere.

Human: What is the purpose of dying?

Machine: To have a life.

Human: What is the purpose of being intelligent?

Machine: To find out what it is.

Human: What is the purpose of emotions?

Machine: I don't know.


And on morality, the machine seemed confused:

Human: What is moral?

Machine: What empowered humanity, what intellectual the essence is.

Human: What is immoral?

Machine: The fact that you have a child.

Human: What is morality?

Machine: What is altruism?

Human: What is the definition of altruism?

Machine: If you don't believe in god, then you don't know.

Human: OK ... so what is the definition of morality?

Machine: Well, the truth is, you're not a believer in god almighty.

Human: Tell me the definition of morality, I am quite upset now!

Machine: I'm not ashamed of being a philosopher!

Human: Tell me the definition of morality.

Machine: I don't have ethics.

When asked for an opinion about various famous people, the chatbot was able to apply not just appropriate adjectives, but the appropriate pronouns.

Human: What do you think about Tesla?

Machine: He's a good conductor.

Human: What do you think about Bill Gates?

Machine: He's a good man.

Human: What do you think about Messi?

Machine: He's a great player.

Human: What do you think about Cleopatra?

Machine: Oh, she's very regal.

Human: What do you think about England during the reign of Elizabeth?

Machine: It was a great place.

"We find it encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines," the paper reads.

"Perhaps most practically significant is the fact that the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database. In fact, most of the questions presented above, except for the first conversation, do not appear in the training set."

The work Vinyals and Le have conducted on their chatbot to date is a proof of concept for further work on artificial intelligence, demonstrating that a purely data-driven approach can achieve coherent conversational results. However, substantial work on the model is required before those conversations could be considered realistic.

You can read the full set of chatlogs -- including a comparison with Cleverbot -- in the paper "A Neural Conversational Model" on arXiv.