First the Rubik's Cube. Next? A bigger Rubik's Cube probably.
OK, let's break this down.
The Rubik's Cube is pretty difficult, right? But you'd imagine it would be pretty easy for an artificial intelligence to break down and solve consistently.
Creating an algorithm that can solve the Rubik's Cube is relatively simple -- we already have algorithms that let AI beat humans at chess, Go and even DOTA 2. But creating a machine that can teach itself to solve the Rubik's Cube, without algorithms hand-crafted by human beings? That's a completely different task.
Stephen McAleer and his colleagues at the University of California, Irvine think they have solved the problem, with a process called "autodidactic iteration."
Autodidactic iteration: McAleer and his team call it a "novel reinforcement learning algorithm that is able to teach itself how to solve the Rubik's Cube with no human assistance." They claim the algorithm can solve 100 percent of randomly scrambled Rubik's Cubes in 30 moves or fewer -- which is equal to, or better than, human performance.
There's a difference between this type of algorithm and the algorithms that produce superhuman performance in games like chess. Those reinforcement learning systems struggle with puzzles like the Rubik's Cube, which has, says McAleer's team, "a high number of states and a small number of reward states."
Autodidactic iteration works backward to solve the cube. It starts with the finished cube, scrambles it to generate practice states, and learns to judge whether each proposed move brings a scrambled cube closer to the solved state.
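The idea can be sketched in miniature. The toy below is purely illustrative and not from the paper: it shrinks the cube to a four-element permutation puzzle with two moves, and swaps the paper's deep neural network for a plain lookup table. But the training loop has the same shape -- scramble backward from the solved state, then update each sampled state's value with a one-step lookahead against the learner's own current estimates.

```python
import random

# Toy stand-in for a Rubik's Cube: a 4-element permutation puzzle with two
# moves. (The real cube and the paper's deep network are far larger; this
# sketch replaces the network with a lookup table.)
SOLVED = (0, 1, 2, 3)
MOVES = (0, 1)

def apply_move(state, move):
    if move == 0:                                 # move 0: rotate left
        return state[1:] + state[:1]
    return (state[1], state[0]) + state[2:]       # move 1: swap first two

def autodidactic_iteration(iterations=4000, max_scramble=12, seed=0):
    """Learn a value function with no solved examples supplied by a human."""
    rng = random.Random(seed)
    value = {SOLVED: 0.0}                         # solved state pinned at 0
    for _ in range(iterations):
        # Generate training states by scrambling backward from the goal.
        state = SOLVED
        for _ in range(rng.randint(1, max_scramble)):
            state = apply_move(state, rng.choice(MOVES))
        if state == SOLVED:
            continue
        # One-step lookahead: reward +1 for reaching the goal, -1 otherwise,
        # plus the learner's *own* current estimate of the resulting state.
        value[state] = max(
            (1.0 if apply_move(state, m) == SOLVED else -1.0)
            + value.get(apply_move(state, m), 0.0)
            for m in MOVES
        )
    return value

def greedy_solve(state, value, limit=20):
    """Follow the learned values greedily until the puzzle is solved."""
    path = []
    while state != SOLVED and len(path) < limit:
        move = max(MOVES, key=lambda m: value.get(apply_move(state, m), -99.0))
        state = apply_move(state, move)
        path.append(move)
    return state, path
```

Running `autodidactic_iteration()` and then `greedy_solve` on a scrambled state walks it back to `SOLVED`. The paper's system does the equivalent at full scale, scoring states with a deep network and searching with Monte Carlo tree search rather than a table and a greedy walk.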
Sounds complicated, and it is. But it's a system with the potential to extend what artificial intelligence can do to broader, more difficult problems like protein folding. For now, McAleer and his team are experimenting with bigger, more difficult-to-solve cubes.
You can read the full paper here.