Google Research goes Asimov, spelling out concrete, real-world questions to ask in order to develop non-apocalyptic artificial intelligence.
In a blog post on Tuesday, Chris Olah of Google Research spelled out five big questions about how to develop smarter, safer artificial intelligence.
The post accompanied a research paper, Concrete Problems in AI Safety, that Google released in collaboration with OpenAI, Stanford and Berkeley. It's an attempt to move beyond abstract or hypothetical concerns about developing and using AI by giving researchers specific questions to apply in real-world testing.
"These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems," said Olah in the blog post.
The five points are:

1. Avoiding negative side effects: how do you keep an AI system from damaging its environment while pursuing its goal?
2. Avoiding reward hacking: how do you stop a system from gaming its reward function rather than genuinely completing its task?
3. Scalable oversight: how can a system learn to do the right thing without requiring constant, costly human feedback?
4. Safe exploration: how do you let a system try out new strategies without making catastrophic mistakes along the way?
5. Robustness to distributional shift: how do you make sure a system behaves sensibly in situations unlike the ones it was trained on?
Google has made no secret of its commitment to AI and machine learning, even maintaining a dedicated research branch, Google DeepMind. Earlier this year, DeepMind's learning system AlphaGo challenged (and defeated) one of the world's premier (human) players of the ancient strategy game Go, in what many considered one of the hardest tests yet for AI.