Apple Intelligence Is the Future: So Why Isn't It on Apple's Most Futuristic Product Yet?

Later this year, you'll find generative AI features on Apple devices, but they aren't coming to the Vision Pro… yet.

Scott Stein Editor at Large
The Vision Pro, for the moment, is left out of the Apple Intelligence loop.

Numi Prasarn/Viva Tung/CNET

Apple's latest developer conference, WWDC 2024, was dominated by the advanced AI services coming this fall to Macs, iPads and iPhones. Notably absent from the mix was the Apple product unveiled a year ago at its last developer conference.

Although the Apple Vision Pro is technically only four months old, it's been a full year since it was first revealed. Apple considers the Vision Pro the future of computing, and as a fully self-contained device with a robust Apple M2 chip, it certainly seems like a possible successor to our current world of phones, laptops and tablets.

And yet, the Vision Pro isn't getting generative AI capabilities this year. Apple Intelligence -- Apple's hardware- and cloud-powered set of AI services -- will work on A17 Pro chips in iPhone 15 Pro models plus M-series iPads and Macs. The Apple Vision Pro, which again has an M2 chip, should be one of the devices to work with Apple Intelligence. It isn't yet, though. VisionOS 2 has a number of smaller upgrades, but generative AI -- a feature I thought would be a key addition for Vision Pro -- isn't one of them.

According to Apple, more platforms will get Apple Intelligence in the future. The Vision Pro and Apple Watch, the two most conspicuous absences from the list right now, could be next. I'm particularly disappointed that the Vision Pro isn't one of the first devices to get the upgrade because the advanced headset is an early-adopter product for testing new ideas in mixed reality. It should be an experimental AI device, too.

But maybe the Vision Pro's complexity and limited footprint are what make it a next-wave device for getting AI services onboard.

Apple is only now making the Vision Pro available outside the US, with eight more countries being added to the list in June and July. Maybe it's that limited availability, and its comparatively small sales numbers, that make it a product that can wait on getting Apple Intelligence.

Or maybe it's that the Vision Pro's different types of inputs -- hand tracking, eye tracking and an array of inner and outer cameras -- pose a different set of challenges for figuring out useful AI. A smarter Siri sounds like it would be a huge help for the Vision Pro, because I'm already using Siri more there than I am with my iPad or Mac. I open apps with my voice, enter text with my voice and search with my voice all the time. It's faster than trying to use my eyes and hands.

The complexity of the Vision Pro could also be putting a different load on the processors that run AI. The neural engine on the Vision Pro's chips also helps process constant room scanning, eye and hand tracking inputs, and the overlay of virtual graphics onto live camera feeds. There's a lot going on at the same time with the Vision Pro, and third-party apps don't even have general camera access in-headset yet.

To me, the more interesting future of generative AI is multimodal: using cameras and microphones to be aware of real-time feeds of what I see and say. Early wearable AI devices like Meta's Ray-Ban glasses and the Humane AI Pin can "see" the world with their cameras, but only by taking still snapshots and then quickly analyzing them. Getting descriptions of my world, or even advice, is fascinating. But right now, it's rough around the edges.

Apple also needs to unleash camera access on the camera-studded Vision Pro headset. Third-party apps on Vision Pro still can't use these cameras to truly see the world around you unless they're built using Apple's new enterprise-focused API. That level of limited access might suggest that Apple is trying to manage the load on the Vision Pro's processing, and generative AI on tap would add another layer of complexity.

Do these complexities mean a next-gen Vision headset with a more advanced processor -- which could arrive, based on reports, in late 2025 -- could be the real recipient of fully fledged Apple Intelligence? It's all guessing now, but the current Vision Pro seems like it should still be more than powerful enough to run generative AI.

Inevitably, Apple will open up those Vision Pro camera permissions more. So will other VR/AR headset manufacturers, like Meta. That might be when generative AI truly becomes transformative for mixed reality, but I still see great uses for it in the meantime. 

A better Siri would be huge, and so would generating creative content or coding. Meta's Andrew Bosworth sees potential uses for generative AI on Quest headsets in the near future. Apple should move on the Vision Pro soon, too. If the future flows through the Vision Pro, the headset is going to need access to the future of how all of Apple's software services work, too.