Google's AlphaGo Zero destroys humans all on its own

The new artificial neural network taught itself to master the ancient game of Go within weeks, without any tips from humans.

Eric Mack Contributing Editor

Google's new artificial intelligence can defeat both humans and other AIs. Fortunately, the only battlefield where it fights and wins is the ancient board game Go.  

AlphaGo Zero, developed by Google-owned DeepMind, is the latest iteration of the AI program. The original AlphaGo defeated Go master Lee Sedol last year, and AlphaGo Master, an updated version, went on to win 60 games against top human players. What's different about AlphaGo Zero is that it became arguably the world's best Go player without any help from humans. 

Knowing only the basic rules of the game, AlphaGo Zero taught itself to beat those earlier versions of the program, which had studied the strategies of human masters as initial input. 

"AlphaGo Zero doesn't use any human data whatsoever," explained lead researcher David Silver in a video posted last week. "Instead what it has to do is learn by itself completely from self-play."

The program played millions of games against itself in just a few days, updating the neural network that powers it as it went.
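The self-play loop the researchers describe can be sketched in miniature. The snippet below is a toy illustration, not DeepMind's code: a tabular agent learns the simple game of Nim purely by playing against itself, with no human examples. It is the same core idea, minus the deep network and tree search.

```python
import random

# Toy sketch of learning by self-play (not DeepMind's actual method).
# Game: one pile of stones; players alternate taking 1-3 stones;
# whoever takes the last stone wins. Losing positions are multiples of 4.

def train(pile_size=10, episodes=20000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # value[n] = estimated win probability for the player to move
    # when n stones remain; starts at an uninformed 0.5
    value = {n: 0.5 for n in range(pile_size + 1)}
    value[0] = 0.0  # no stones left: the player to move already lost
    for _ in range(episodes):
        n = pile_size
        history = []  # states visited, alternating between the two "selves"
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if rng.random() < epsilon:
                m = rng.choice(moves)  # occasional exploration
            else:
                # pick the move that leaves the opponent in the
                # worst-looking position under the current estimates
                m = min(moves, key=lambda m: value[n - m])
            history.append(n)
            n -= m
        # the player who just moved took the last stone and won;
        # propagate the result back, flipping perspective each ply
        outcome = 1.0
        for state in reversed(history):
            value[state] += 0.1 * (outcome - value[state])
            outcome = 1.0 - outcome
    return value

values = train()
```

After training, the table alone encodes sound play: positions like 4 and 8 stones (losses under perfect play) end up with low values, while 3 or 10 stones score high, even though the agent was never shown a single expert game.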

After almost five million games played against itself, AlphaGo Zero could outplay humans and the original AlphaGo. After 40 days, it was capable of beating AlphaGo Master.

A paper detailing AlphaGo Zero's development was published in Thursday's issue of the journal Nature.

In a matter of weeks, the program absorbed strategies that humans accumulated over thousands of years. It also developed unconventional moves that surpass the techniques of human masters, leaving the pros stunned.

"At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic," wrote Andy Okun and Andrew Jackson of the American Go Association in an editorial accompanying the study in Nature.

DeepMind says it has plans for the technology behind AlphaGo Zero beyond just kicking butt all over an ancient game board.

"Ultimately we want to harness algorithmic breakthroughs like this to help solve all sorts of pressing real world problems like protein folding or designing new materials," said Demis Hassabis, co-founder and CEO of DeepMind, in a statement.  

That sounds great, but just as a precaution let's take the advice of Elon Musk and Stephen Hawking and keep any super-fast learning AI away from the nuclear launch codes for now.
