(Credit: Flaming June by Frederic Leighton, oil on canvas, 1895)
Japanese scientists have developed an algorithm that can predict what a dreamer is seeing from their patterns of brain activity.
At the ATR Computational Neuroscience Laboratories in Kyoto, Japan, Yukiyasu Kamitani and his colleagues have long been gathering the data needed to display a sleeper's dreams on a screen, and it looks like they might be nearly there.
Using functional magnetic resonance imaging (fMRI), which monitors brain activity via changes in blood flow, the team has created an algorithm that can predict in real time what images are appearing in a dream. This is believed to be the first time objective data has been collected about the content of dreams.
Except it's a little more complicated than that. The study is predicated on the idea that our brains repeat activity patterns when repeating thoughts: every time you think about a cat, for example, your brain behaves in the same, or a similar, way. The same idea underpinned a 2011 University of California experiment that reconstructed the film trailers subjects were watching from their brain activity.
Three test subjects took part in the research, sleeping in three-hour blocks in an MRI scanner while attached to an EEG machine, which monitored the electrical activity of their brains. As a subject drifted into Stage 1 non-REM sleep and began to dream, the scientists would wake them and ask what they had seen. The process was repeated nearly 200 times for each subject over the course of 10 days.
After this stage, the scientists gathered a collection of web images corresponding to the 20 categories of image most commonly reported by each subject, such as buildings or people. They showed these images to the subjects while they were awake, still monitoring brain activity, to see whether each subject's brain responded to an image in the same way asleep and awake.
In this way, the scientists gleaned a rough translation of each subject's brain activity, and fed that data into a machine-learning algorithm that could refine its accuracy as more data arrived. When the subjects next slept in the MRI machine, the algorithm scanned their brain activity and produced visualisations of what it predicted they were dreaming about. As it turned out, it was correct only 60 per cent of the time, a figure Kamitani believes is significant because it is too high to be down to chance.
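The paper's actual decoder isn't described here, but the pipeline the article outlines (learn a mapping from waking brain responses to image categories, then apply it to scans taken during sleep and score the predictions against chance) can be sketched with synthetic data and a toy nearest-centroid classifier. Everything below, from the voxel counts to the category names, is illustrative and not taken from the study:

```python
import random

random.seed(0)
N_VOXELS = 50  # stand-in for the fMRI features a real decoder would use

def synth_response(signature, noise=1.0):
    """One simulated brain response: a category signature plus random noise."""
    return [s + random.gauss(0, noise) for s in signature]

# Hypothetical category "signatures", standing in for waking-state scans
# recorded while subjects viewed images of each category.
categories = ["building", "person"]
signatures = {c: [random.gauss(0, 1) for _ in range(N_VOXELS)] for c in categories}

# Training: average the waking responses for each category (nearest-centroid).
centroids = {}
for cat in categories:
    train = [synth_response(signatures[cat]) for _ in range(30)]
    centroids[cat] = [sum(v) / len(train) for v in zip(*train)]

def decode(response):
    """Predict the category whose centroid is closest in squared distance."""
    def dist(cat):
        return sum((r - m) ** 2 for r, m in zip(response, centroids[cat]))
    return min(categories, key=dist)

# Evaluation: fresh "sleep" responses drawn from the same signatures,
# scored as a two-way choice, so chance performance is 50 per cent.
trials = 200
correct = sum(
    decode(synth_response(signatures[cat])) == cat
    for cat in categories
    for _ in range(trials // 2)
)
accuracy = correct / trials
print(f"pairwise accuracy: {accuracy:.0%} (chance = 50%)")
```

On clean synthetic data like this the toy decoder scores far above chance; the point of the sketch is only the structure of the experiment, not the 60 per cent figure, which came from real, much noisier sleep-onset scans.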
This differs from professor Jack Gallant's University of California study in that Gallant's team could only reconstruct images from brain patterns after a subject had already seen them. It also brings us a significant step closer not only to understanding what happens when we sleep, but also, possibly, to seeing inside the minds of coma patients and controlling computers with our brains.
The full results of the study are published, behind a paywall, in the journal Science.