X

Google's five rules for AI safety

Google Research goes Asimov, spelling out concrete, real-world questions to ask in order to develop non-apocalyptic artificial intelligence.

Luke Lancaster Associate Editor / Australia
Luke Lancaster is an Associate Editor with CNET, based out of Australia. He spends his time with games (both board and video) and comics (both reading and writing).
Luke Lancaster

Chris Olah at Google Research has, in a blog post on Tuesday, spelled out the five big questions about how to develop smarter, safer artificial intelligence.

The post came alongside a research paper Google released in collaboration with OpenAI, Stanford and Berkley called Concrete Problems in AI Safety. It's an attempt to move beyond abstract or hypothetical concerns around developing and using AI by providing researchers with specific questions to apply in real-world testing.

"These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems," said Olah in the blog post.

The five points are:

  • Avoiding Negative Side Effects: AI shouldn't disturb its environment while completing set tasks
  • Avoiding Reward Hacking: AI should complete tasks properly, rather than using workarounds (like a cleaning robot that covers dirt with material it doesn't recognise as dirt)
  • Scalable Oversight: AI shouldn't need constant feedback or input to be effective
  • Safe Exploration: AI shouldn't damage itself or its environment while learning
  • Robustness to Distributional Shift: AI should be able to recognise new environment and still perform effectively in them

Google has made no secret about its commitment to AI and machine learning, even having its own dedicated research branch, Google DeepMind. Earlier this year, DeepMind's learning algorithm AlphaGo challenged (and defeated) one of the world's premier (human) players at the ancient strategy game Go in what many considered one of the hardest tests for AI.