
Behind the Doors of Meta's Top-Secret Reality Labs

My visit to a future that isn't quite here yet: neural wristbands, ghostly 3D audio and lifelike avatars. Plus, the new Meta Quest Pro.

Scott Stein, Editor at Large

Mark Zuckerberg sat across from me, controlling objects on a screen with small motions of his fingers. Taps, glides, pinches. On his wrist was a chunky band that looked like an experimental smartwatch: It's Meta's vision of our future interactions with AR, VR, computers and just about everything else.

"It'll work well for glasses…I think it'll actually work for everything. I think in the future, people will use this to control their phones and computers, and other stuff…you'll just have a little band around your wrist," Zuckerberg said, right before he demoed the neural wristband. His hand and finger movements seemed subtle, almost fidgety. Sometimes nearly invisible.

Neural input devices are just one part of Meta's strategy beyond VR, and these wristbands were among the tech I got to see and try during a first-ever visit to Meta's Reality Labs headquarters in Redmond, Washington. The trip was the first time Meta's invited journalists to visit its future tech research facility, located in a handful of nondescript office buildings far north of Facebook's Silicon Valley headquarters. 

Entering Meta Reality Labs in Redmond, Washington.

Scott Stein/CNET

The last time I visited Redmond, I was trying Microsoft's HoloLens 2. My trip to Meta was a similar experience. This time, I was demoing the Meta Quest Pro, a headset that blends VR and AR into one device and aims to kick off Zuckerberg's shift toward a more work-focused metaverse strategy.

Meta's newest Connect conference news focuses on the Quest Pro, and also on new work partnerships with companies including Microsoft, Zoom, Autodesk and Accenture, which could see Meta dovetail with Microsoft's mixed reality ambitions.

I also got to look at a handful of experimental research projects that aren't anywhere near ready for everyday use but show glimpses of exactly what Meta's shooting for next. These far-off projects, and a more expensive Quest Pro headset, come at a strange time for Meta, a company that's already spent billions investing in the future of the metaverse, and whose most popular VR headset, the Quest 2, has still sold fewer than 20 million units. It feels like the future isn't fully here yet, but companies like Meta are ready for it to be.

I experienced a number of mind-bending demos with a handful of other invited journalists. It felt like I was exploring Willy Wonka's chocolate factory. But I also came away with the message that, while the Quest Pro looks like the beginning of a new direction for Meta's hardware, it's nowhere close to the end goal.

A demo of EMG wristbands measuring motor neuron signals at Meta Reality Labs Research.

Meta

Neural inputs: Wristbands that adapt to you

"Co-adaptive learning," Michael Abrash, Meta's Reality Labs' chief scientist, told me over and over again. He was describing the wristbands that Meta has discussed multiple times since acquiring CTRL-Labs in 2019. It's a hard concept to fully absorb, but Meta's demo, shown by a couple of trained researchers, gave me some idea of it. Wearing the bulky wristbands wired to computers, the wearers moved their fingers to make a cartoon character swipe back and forth in an endless-running game. Then, their movements seemed to stop. They became so subtle that their hands barely twitched, and still they played the game. The wristbands use EMG, or electromyography (the electrical measurement of muscles) to measure tiny muscle impulses.

A feedback-based training process gradually allowed the wearers to start shrinking down their actions, eventually using only a single motor neuron, according to Thomas Reardon, Reality Labs' Director of Neuromotor Interfaces and former CEO of CTRL-Labs, who talked us through the demos in Redmond. The end result looks a little like mind reading, but it's done by subtly measuring electrical impulses showing an intent to move. 

Mark Zuckerberg using the EMG wristband in a demo in front of a handful of journalists during my visit. 

Meta

When Zuckerberg demonstrated the wristband, he used a similar set of subtle motions, though they were more visible. The wristband's controls feel similar to a touch-based trackpad or air mouse, able to identify pressure-based pinches, swipes and gestures.

"In the long run, we're going to want to have an interface that is as natural and intuitive as dealing with the physical world," Abrash said, describing where EMG and neural input tech is aiming. 

Typing isn't on the table yet. According to Zuckerberg, it would require more bandwidth to get to that speed and fidelity: "Right now the bit rate is below what you would get for typing quickly, but the first thing is just getting it to work right." The goal, at some point, is to make the controls do more. Meta sees this tech as truly arriving in maybe five to six years, which feels like an eternity. But should that timeframe hold, it would likely line up with when Meta expects its finished AR glasses to become available.

The EMG wristband looks like a large prototype smartwatch, with sensors in the segmented strap.

Scott Stein/CNET

Zuckerberg says the wristbands are key for glasses, since we won't want to carry controllers around, and voice and hand tracking aren't good enough. But eventually he plans to make these types of controls work for any device at all, VR or otherwise. 

The controls look like they'll involve an entirely different type of input language, one that might have similarities to existing controls on phones or VR controllers, but which will adapt over time to a person's behavior. It seems like it would take a while to learn to use. 

"Most people are going to know a whole lot about how to interact in the world, how to move their bodies," Reardon said to me. "They're going to understand simple systems like letters. So let's meet them there, and then do this thing, this pretty deep idea called co-adaptation, in which a person and a machine are learning together down this path towards what we would call a pure neural interface versus a neural motor interface, which blends neural decoding with motor decoding. Rather than saying there's a new language, I'd say the language evolves between machine and person, but it starts with what people do today."

A demonstration showing how feedback can lead to the wristband sensing smaller and smaller motions.

Meta

"The co-adaptation thing is a really profound point," Zuckerberg added. "You don't co-adapt with your physical keyboard. There's a little bit of that in mobile keyboards, where you can misspell stuff and it predicts [your word], but this is a lot more."

I didn't get to wear or try the neural input wristbands myself, but I got to watch others using them. Years ago at CES, I did briefly try a different type of wrist-worn neural input device, and I got a sense of how technologies like this actually work. It's different from the head-worn device by NextMind (since acquired by Snap) that I tried a year ago, which measured eye movement using brain signals.

The people using the Meta wristbands seemed to make their movements easily, but these were basic swiping game controls. How would they work for more mission-critical use in everyday AR glasses? Meta's not there yet: According to Zuckerberg, the goal for now is just to get the tech to work and show how adaptive learning could eventually shrink down response movements. It may be a while before we see this tech in action on any everyday device, but I wonder how Meta could apply the principles to machine learning-assisted controls that aren't neural input-based. Could we see refined controllers or hand-tracking combinations arrive before this? Hard to tell. But these bands are a far-off bet at the moment, not an around-the-corner possibility.

Wearing a spatially tracked headset that creates audio effects I can't distinguish from the speakers in the room.

Meta

Super-real 3D audio 

A second set of demos, demonstrating next-generation spatial audio, replicated research Meta talked about back in 2020 -- and which it originally planned to show off in person before COVID-19 hit. Spatial audio is already widely used in VR headsets, game consoles and PCs, and on a variety of everyday earbuds such as AirPods. What Meta's trying to do is not just make audio seem like it's coming from various directions, but to project it so it seems like it's literally coming from within your physical space.

A visit to the labs' soundproof anechoic chamber -- a suspended room with foam walls that absorb reflections of sound waves -- showed us an array of speakers designed to help study how sounds travel to individual ears, and to explore how sounds move in physical spaces. The two demos we tried after that showed how ghostly real the sounds can feel.

Inside Meta's anechoic chamber, where a massive speaker array is used to help create spatial audio profiles.

Scott Stein/CNET

One, where I sat down in a crowded room, involved me wearing microphones in my ears while the project leads moved around me, playing instruments and making noises at different distances. After 40 seconds of recording, the project leads played the audio back to me through over-ear headphones… and parts of it sounded exactly like someone was moving around the room near me. What made it convincing, I think, were the audio echoes: the sense that the movement was reverberating in the room.

A second demo had me wearing a spatially tracked pair of headphones in a room with four speakers. I was asked to identify whether the music I heard was coming from the speakers or from the headphones. I failed. The music seemed to project out flawlessly, and I had to take off the headphones to confirm which was which as I walked around.

According to Michael Abrash's comments back in 2020, this tech isn't as far away from becoming a reality as the neural wristbands are. Meta's plan is for phone cameras to eventually help tune personal 3D audio, much like Apple just added to its newest AirPods, but with the added benefit of realistic room mapping. Meta's goal is to have AR projections eventually sound convincingly present in any space, and it's a goal that makes sense: A world of holographic objects will need to feel anchored in reality. Although if future virtual objects sound as convincingly real as my demos did, it might become hard to distinguish real sounds from virtual ones, which brings up a whole bunch of other existential concerns.

My conversation with an avatar so realistic it felt like I was in the same room with them.

Meta

Talking to photo-real avatars

I'm in a dark space, standing across from a seemingly candle-lit and very real face of someone who was in Meta's Pittsburgh Reality Labs Research offices, wearing a specially built face-tracking VR headset. I'm experiencing Codec Avatars 2.0, a vision of how realistic avatars in metaverses could get.

How real? Quite real. It was uncanny: I stood close and looked at the lip movement, his eyes, his smiles and frowns. It felt almost like talking with a super-real PlayStation 5 game character, then realizing over and over again that this was a real-time conversation with a real person, in avatar form.

I wondered how good or limited the face tracking could be: After all, my early Quest Pro demos using face tracking showed its limits. I asked Jason, the person whose avatar I was next to, to make various expressions, which he did. He said I was a bit of a close-talker, which made me laugh. The intimate setting felt like I had to get close and talk, like we were in a cave or a dimly lit bar. I guess it's that real. Eventually, the realism felt good enough that I began assuming I was having a real conversation, albeit one with a bit of uncanny valley around the edges. It felt like I was in my own living video game cutscene.

Meta doesn't see this coming into play for everyday headsets any time soon. First of all, standalone VR headsets are limited in their processing power, and the more avatars you have in a room, the more the graphics get taxed. Also, the tracking tech isn't available to everyone yet.

Trying out a chat with an Instant Codec Avatar, created with a phone-made head scan.

Meta

A more dialed-down version appeared in my second demo, which showed an avatar created from a phone-camera face scan using a new technology called Instant Codec Avatars. The face looked better than most scans I'd ever made myself, but I felt like I was talking with a frozen, only slightly moving head. The end result was less fluid than the cartoony, Pixar-like avatars Meta uses right now.

An actor who was 3D scanned ahead of time using an array of cameras. I saw his rendered avatar layered with digital clothing.

Scott Stein/CNET

One final demo showed a full-body avatar (legs, too!) that wasn't live or interactive. It was a premade 3D scan of an actor, captured in a special room with an array of cameras. The demo focused on digital clothes that could be realistically draped over the avatar. The result looked good up close, but similar to a realistic video game. It seems like a test drive for how digital possessions could someday be sold in the metaverse, but this isn't something that would work on any headset currently available.

My sneaker gets 3D scanned using Meta's new phone-based capture tech.

Scott Stein/CNET

3D scanning my shoes (plus, super-real cacti and teddy bears)

Like a volunteer in a magic show, I was asked to remove one of my shoes for a 3D scanning experiment. My shoe ended up on a table, where it was scanned with a phone camera -- no lidar needed. About half an hour later, I got to look at my own shoe in AR and VR. 3D scanning, like spatial audio, is already widespread, with lots of companies focused on importing 3D assets into VR and AR. Meta's research is aiming for better results on a variety of phone cameras, using a technology called neural radiance fields. Another demo showed a whole extra level of fidelity.

My shoe, after being scanned, in AR.

Scott Stein/CNET

A couple of prescanned objects, which apparently took hours to prepare, captured the way light plays off complex 3D shapes. The results -- which showed furry, spiky, fine-detailed objects including a teddy bear and a couple of cacti -- looked seriously impressive on a VR headset. The curly fur didn't seem to melt or mat together like it does in most 3D scans; instead it was fluffy, seemingly without angles. The cactus spines spread out in fine, spiky threads.

Of all the demos I tried at Reality Labs, this was maybe the least wowing. But that's only because there are already, through various processes, lots of impressive 3D-scanned and rendered experiences in AR and VR. It's not clear how instant or easy it would be to achieve the results of Meta's research examples in everyday use, which makes it hard to judge how effective the tech really is. For sure, if scanning objects into virtual, file-compatible versions of themselves gets easier, it'll be key for any company's metaverse ambitions. Tons of businesses are already aiming to sell virtual goods online, and the next step is letting anyone easily do it for their own stuff. Again, this is already possible on phones, but it doesn't look as good…yet.

Chief Scientist Michael Abrash talks to us in front of a wall of prototype VR and AR headsets.

Meta

What does it all mean?

The bigger question on my mind, as my day at Meta's facilities ended and I called a Lyft from the parking lot, was what it all added up to. Meta has a brand-new Quest Pro headset, its bleeding-edge device for mixing AR and VR together, one that offers new possibilities for avatar control with face tracking.

The rest of the future remains a series of question marks. The roads Meta wants its metaverse ambitions to travel are still unpaved. Neural inputs, AR glasses, blends of virtual and real sounds, objects and experiences? These could still be years away.

In a year when Meta has seen its revenue drop amid inflation and an economic downturn, even as it makes sizable bets on the metaverse, will all of these projects come to fruition? How long can Meta's long-game metaverse visions be sustained?

Meta's prototype VR sunglasses, the "North Star" for what the tech aims to become.

Scott Stein/CNET

Abrash talks to us once more as we gather for a moment before the day's end, returning to a connecting theme: that immersive computing will, eventually, be a true revolution. Earlier on, we had stopped at a wall full of VR and AR headsets, a trophy case of all the experimental prototypes Meta has worked on. We saw mixed reality ones, ones with displays designed to show eyes on the outside, and ones so small they're meant to be the dream VR equivalent of sunglasses.

It made me think of the long road of phone design experimentation before smartphones became mainstream. Clearly, the metaverse future is still a work in progress. While big things may be happening now, the true "smartphones" of the AR and VR future might not be around for a long while to come.

"The thing I'm very sure of is, if we go out 20 years, this will be how we're interacting," Abrash said in front of the headset wall. "It's going to be something that does things in ways we could never do before. The real problem with it is, it's very, very hard to do this."