So what does it take to do virtual reality right?
As it turns out, virtual reality doesn't have to replicate the real world in order to work.
It just has to provide the right inputs to satisfy the relevant sensors and drive the correct inferences.
To paraphrase Morpheus, some rules can be bent.
Others can be broken.
For example, many of you will get to experience the demo of Crescent Bay, the latest iteration of the Rift, during F8.
When you do, I'm confident that most of you will feel like you've been teleported into a virtual world, like you're literally there.
And yet when you're using Crescent Bay, you're looking at photons coming from pixels strobed at 90 Hz, rather than photons arriving continuously from real-world surfaces and lights.
That works because the period during which the receptive fields on the retina accumulate photons is about 20 milliseconds.
And the structure of those fields is such that they'll respond just as well to photons arriving in several short bursts as they do to photons arriving continuously from real-world surfaces.
Drop the frame rate to 60 Hz, however, and those same receptive fields will start to detect flicker.
Drop it to 10 Hz, and motion will cease to appear smooth.
In fact, present-day display systems differ from the real world in many ways.
But by taking advantage of the physiology of the visual system, they nonetheless produce the desired signals to the brain.
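The arithmetic behind those thresholds can be sketched in a few lines. This is a toy back-of-the-envelope model, not a simulation of the retina: the ~20 ms integration window comes from the text above, and the function name and output format are my own.

```python
# Toy model: how many display strobe pulses land inside the retina's
# ~20 ms photon-accumulation window at a given refresh rate.
# The 20 ms window is the figure cited in the talk; the rest is illustrative.

INTEGRATION_WINDOW_MS = 20.0

def strobes_per_window(refresh_hz, window_ms=INTEGRATION_WINDOW_MS):
    """Average number of strobe pulses falling inside one integration window."""
    frame_period_ms = 1000.0 / refresh_hz
    return window_ms / frame_period_ms

for hz in (90, 60, 10):
    n = strobes_per_window(hz)
    print(f"{hz:3d} Hz: frame period {1000.0 / hz:5.1f} ms, "
          f"~{n:.1f} strobes per 20 ms window")
```

At 90 Hz, roughly two bursts fall inside every accumulation window, so the bursts fuse into the same response as continuous light; at 10 Hz, most windows see at most one burst, and the illusion of smooth motion falls apart.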
So part of making virtual reality work is learning how our sensors can and can't be driven to send the right signals to the brain.
The other, less obvious part, is learning what it takes to get the brain to make a desired inference.
For example, remember this?
We were able to get the brain to infer the corresponding head movement by leveraging its strong assumption of convexity.
The same sort of thing is required in a general sense for VR to work, and the key to that is agreement between multiple sensors and our internal model of reality, especially when it involves feedback loops.
For many people, the defining moment in the Crescent Bay demo is finding themselves on the edge of a long drop.
The response is often to grab a nearby virtual pipe, which shows that enough unconscious inference has kicked in to trigger automatic responses; it means the brain believes it's someplace real.
Why does that happen, while the scene where Neo climbs out onto a ledge outside a building and looks down a thousand feet never makes anyone grab for support?
The difference is that when you move your head in Crescent Bay, both the motion your vestibular organs report and the parallax your visual system reports match the motor signals you sent your neck muscles, and your sense of proprioception, that is, your model of the position of your body.
And they do so with low enough latency that your brain can fuse all those data points into a coherent model of the world.
At the core of this is a feedback loop: you move your head, and what you see changes correctly and with an almost imperceptible delay, allowing your brain to maintain the same kind of model of the virtual world as it does for the real world, which then leads smoothly to further motion.
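One way to think about "low enough latency" is as a motion-to-photon budget that the whole pipeline has to fit inside. The sketch below is purely illustrative: the stage names, the per-stage timings, and the ~20 ms target are my assumptions for the example, not measured Crescent Bay numbers.

```python
# Illustrative motion-to-photon latency budget for the head-tracking
# feedback loop. All stage names and timings are assumed for the sketch,
# not actual hardware measurements.

BUDGET_MS = 20.0  # assumed target for an imperceptible-feeling loop

pipeline_ms = {
    "tracker sampling":  1.0,
    "pose prediction":   0.5,
    "render":            11.1,  # one frame at 90 Hz
    "scanout + strobe":  5.0,
}

total = sum(pipeline_ms.values())
print(f"total motion-to-photon latency: {total:.1f} ms "
      f"({'within' if total <= BUDGET_MS else 'over'} the {BUDGET_MS:.0f} ms budget)")
```

The point of the exercise is that every stage eats into one fixed budget: if rendering alone takes a full frame, everything else in the loop has to fit in what's left for the feedback to feel instantaneous.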
That's the fundamental difference between movies and VR.
Movies provide similar images, but with no feedback loop for head motion.
As a result, we perceive movies as moving pictures on a flat surface, still firmly embedded in the real world.
Good VR, in contrast, isn't perceived as pictures at all.
It replaces, rather than augments, the real world.
VR is about driving our perceptions the way they're built to be driven.
If we can do that, it should be completely unsurprising that it enables us to create new realities.
If the technology becomes good enough, VR should, in the limit, be able to create not only any experience that's possible in the real world, but any experience that we're capable of having.