Image recognition is a complicated business. For Google, that means an artificial neural network -- software capable of learning.
The software is based on the structure of biological brains, and it is trained by being shown millions of images. It adjusts itself constantly until it can accurately recognise, say, a schnauzer or a stove. Information filters from one layer of neurons to the next until it reaches the final layer, which delivers the network's response.
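That layer-to-layer flow can be sketched as a toy feedforward pass. Everything here is an illustrative assumption, not Google's actual architecture: the layer sizes and weights are random, and the three-way answer (schnauzer / stove / other) is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of layers: a 16-value "image" filters through two
# hidden layers before the final layer delivers a 3-way response
# (say, schnauzer / stove / other). Sizes are arbitrary for illustration.
layers = [rng.normal(size=(16, 8)),
          rng.normal(size=(8, 8)),
          rng.normal(size=(8, 3))]

def forward(x):
    # Information filters from neuron layer to neuron layer...
    for W in layers[:-1]:
        x = np.maximum(x @ W, 0.0)   # ReLU: keep only positive responses
    # ...until the final layer delivers its response.
    return int(np.argmax(x @ layers[-1]))

answer = forward(rng.normal(size=16))
```

Training, as the Google team describes below, amounts to nudging those weights until the delivered response matches the labelled examples.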
Giving this neural network the ability to recognise images is only a short step from giving it the tools to generate them. And, as it turns out, letting the network generate images can be hugely useful.
"We train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (for example, a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, colour or orientation)," wrote Google Research's software engineering team in a blog post.
"But how do you check that the network has correctly learned the right features? It can help to visualise the network's representation of a fork."
This way, when the neural network returns an image that is somehow incorrect, the team can adjust its parameters. In one example, the network was asked to visualise a dumbbell and produced images that included a muscled arm gripping it; the error was corrected by removing the arm.
Where it gets really fun is when the neural network is fed an image and asked to search for small, subtle features. The network finds imagery where the human eye sees none.
"We just start with an existing image and give it to our neural net. We ask the network: 'Whatever you see there, I want more of it!'" the team explained.
"This creates a feedback loop: If a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere."
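That feedback loop can be sketched in a few lines, assuming a toy one-layer network with random weights in place of Google's real model. Each pass nudges the input image in whatever direction makes the layer's responses stronger, which is gradient ascent on the input: "whatever you see there, I want more of it."

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))       # toy layer: 8 features over a 16-pixel "image"
image = rng.normal(size=16)        # start with an existing image

def activations(x):
    return np.maximum(W @ x, 0.0)  # ReLU feature responses

before = activations(image).sum()

# The feedback loop: repeatedly amplify whatever the layer already
# responds to, by stepping the input along the activation gradient.
for _ in range(100):
    active = (W @ image > 0).astype(float)   # which features fired
    image += 0.1 * (W.T @ active)            # nudge input so they fire harder

after = activations(image).sum()             # responses are now far stronger
```

After enough passes the faint initial response dominates the image, which is how a cloud that vaguely resembles a bird ends up as a highly detailed bird.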
The results when this happens are nothing short of spectacular. A personal favourite is Edvard Munch's "The Scream," made creepier with eyes (and dogs, for some reason). The artificial neural network was trained mainly on animal images, so expect to see a lot of dogs and fish and lizards and birds.
Click through the gallery above to see inside a machine's dreams.