This is a fully functional prototype that we have developed to advance some of these technologies.
Internally, we call it Half Dome.
What you're seeing here is the integration of varifocal technology.
Think about it like the moving lenses in the autofocus function of cameras.
To provide the same level of focus in VR, we move the screens depending on what you're looking at.
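As an illustration only, here is a minimal thin-lens sketch of that idea: given the depth of whatever the user is looking at, it computes how far the display should sit behind the lens so the virtual image lands at that depth. This is not the Half Dome implementation, and the focal length is a made-up placeholder value.

```python
# Minimal sketch of the varifocal idea under a simple thin-lens model.
# Assumption: lens focal length of 40 mm (hypothetical placeholder).

def screen_distance_m(fixation_depth_m: float, lens_focal_length_m: float = 0.040) -> float:
    """Thin-lens relation 1/s_o + 1/s_i = 1/f with the virtual image placed at
    s_i = -fixation_depth; solve for the screen-to-lens distance s_o."""
    D, f = fixation_depth_m, lens_focal_length_m
    return (f * D) / (f + D)

# Example: focusing on a near object vs. something effectively at infinity.
near = screen_distance_m(0.3)   # screen moves closer to the lens
far = screen_distance_m(1e6)    # approaches the focal length
print(f"near target: {near * 1000:.1f} mm, far target: {far * 1000:.1f} mm")
```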
This solution gives you visual comfort, clarity, and, as you can see, up-close sharpness.
[APPLAUSE]
We have also optimized the mechanical design.
Despite the screens moving inside the headset, you don't notice any noise or vibration.
And for a compelling visual experience, it has a 140-degree field of view.
You can see the difference.
[APPLAUSE]
The added bonus: our continued innovation in lenses has allowed us to pack all this new technology in and still keep the Rift form factor and weight.
Pretty exciting?
[APPLAUSE]
Let's talk now about 3D reconstruction.
VR can be magical. It can take you to space or to the deepest oceans.
It can take you to the front row at Fashion Week, or to the pits at a Formula One race, if that's where you really want to be.
But for many of us, the most evocative, meaningful places are often more personal.
Your home, your parents' home, your favorite park.
You may want to bring those familiar places into VR yourself: capturing the world, reconstructing it in 3D, and sharing it with others.
We are working on improving 3D reconstruction in two ways.
First, by making it more accessible, so it's not only the result of expensive equipment or professional artistry.
And second, by increasing the fidelity of what we capture and render.
Yesterday we saw a way to bring your environment into VR with a point-cloud reconstruction.
That demo was built using traditional computational photogrammetry, which can work from pictures or videos captured with any camera.
And now, thanks to our research engineers, we have created another way to do this.
We take a burst of images, a regular panorama, from any phone with a dual camera.
From those, we take image pairs, one from each camera, along with the depth information. Our algorithm calculates a consistent depth and stitches them together.
It generates a new panorama in 3D.
[LAUGH] Pretty cool.
[APPLAUSE]
We collect them at a rate of one image per second, and we process them really fast, in even less time than it took to capture them.
The result is highly detailed geometry: a 3D panorama offering a more immersive experience that you can enjoy in VR.
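To make the general shape of such a pipeline concrete, here is a much-simplified sketch assuming OpenCV: it estimates a per-pair depth map from each dual-camera capture and stitches the color frames into a panorama. The focal length and baseline are hypothetical placeholders, and the consistent-depth step described above, which is the actual research contribution, is not reproduced here.

```python
# Simplified depth-panorama sketch (not the production algorithm).
# Assumptions: rectified left/right pairs, placeholder camera parameters.
import cv2
import numpy as np

FOCAL_PX = 1500.0    # assumed focal length of the phone camera, in pixels
BASELINE_M = 0.012   # assumed distance between the two lenses, in meters

def depth_from_pair(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    """Estimate a depth map (meters) from one rectified left/right image pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mask invalid matches
    return FOCAL_PX * BASELINE_M / disparity    # depth = f * b / disparity

def build_3d_panorama(pairs: list[tuple[np.ndarray, np.ndarray]]):
    """Return a stitched color panorama plus per-capture depth maps."""
    depths = [depth_from_pair(left, right) for left, right in pairs]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch([left for left, _ in pairs])
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama, depths
```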