The Apple Watch's Double-Tap Gesture Is a Lot Better Than Expected and I Want More

Commentary: Once only an accessibility feature, the double-tap finger gesture added to the latest Watches is Apple's way of easing us into a new interface with a lot of potential.

Scott Stein Editor at Large

Double tapping doesn't do a ton of things on Apple Watch right now, but it might evolve a lot over time.

For this year's Apple Watches, the company surfaced a fingertip double-tap gesture derived from accessibility features introduced years before. It's had me tapping my fingers a lot. Taking walks. On the train. This was before the feature was even made available. I just found myself doing little tap gestures, imagining what I could suddenly activate.

Apple's double-tap gesture is here now, and I've been trying it for over a week. Sometimes it's fantastic. Sometimes it feels annoyingly limited. But it's made me want more. A lot more. After years of dreaming about future interfaces on wrists and on AR/VR headsets, this little double tap feels like the smallest entry point to something bigger. Almost as if Apple is easing us into a whole new interface language, step by tiny step.

Next year, Apple is releasing a far more ambitious product: the Vision Pro, a combination AR/VR headset that folds all of iOS into a mixed-reality interface. It leans entirely on eye and hand tracking to control everything, and double-tapping is one of the key gestures it uses to "click" on things.

Is the Apple Watch double-tap a true doorway into a new gestural interface future? Not yet. The current iteration is too laggy and too limited. But it's a move I expect to expand, improve and carry over to wearables made by other companies, too.


A foot in the door on new ideas

Apple is quick to clarify that double-tapping on the Apple Watch isn't the same as double-tapping on the Vision Pro. The two work via different technologies: the watch combines optical heart rate and accelerometer/gyroscope measurements, while the Vision Pro uses external cameras to sense hand motion. The watch can't even be used to control the Vision Pro -- yet. But it's no accident that these gestures resemble each other. 
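To get a feel for what motion-based tap sensing involves, here's a minimal sketch of a naive double-tap detector that looks for two accelerometer spikes in quick succession. Apple's actual algorithm isn't public (and also fuses optical heart-rate data), so the thresholds, sample rate and window sizes below are made-up assumptions for illustration only.

```python
# Illustrative sketch only: a naive double-tap detector over accelerometer
# magnitude samples. Apple's real approach is unpublished; every number
# here is an assumption.

def detect_double_tap(samples, rate_hz=100, threshold=2.0,
                      min_gap_s=0.05, max_gap_s=0.4):
    """Return True if two acceleration spikes land within a tap-like gap.

    samples: list of acceleration magnitudes (in g) at rate_hz.
    """
    # Find indices where the signal crosses the spike threshold upward.
    peaks = [i for i in range(1, len(samples))
             if samples[i] >= threshold and samples[i - 1] < threshold]
    # Check each consecutive pair of spikes for a plausible tap rhythm:
    # close enough to be intentional, far enough apart to be two taps.
    for a, b in zip(peaks, peaks[1:]):
        gap = (b - a) / rate_hz
        if min_gap_s <= gap <= max_gap_s:
            return True
    return False
```

The hard part in practice isn't this core check but rejecting false positives from walking, typing or gesturing, which is presumably where Apple's new low-power algorithms earn their keep.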

Companies like Meta have already outlined a future where wrist trackers and headsets intertwine. In Meta's vision, neural input technologies like electromyography (EMG) will detect even subtle hand motions with high accuracy. 

In the meantime, there's likely to be a period of a few years where watches evolve "good enough" gesture awareness without EMG. Much like phones showed off augmented-reality effects using nothing but cameras and motion detection before depth-sensing technologies like lidar arrived, we're seeing the start of "good enough" gesture tracking, with more advanced sensors to come that will refine those capabilities even further.

Apple already has a more expansive set of gestures in existing Apple Watches, under Accessibility Settings in a category called AssistiveTouch. Those gestures let people who can't use the touchscreen fully navigate the watch and activate any onscreen "touch" feature. 

Apple's single double-tap feature is a refinement on the new Series 9 and Ultra 2 watches, using new algorithms that let it be always available without too much battery drain. Apple focused on that one feature for this release, but the company could clearly create other gestures next. Fist clenches and single taps are already among the Accessibility controls, along with a motion-controlled pointer. Additional tapping controls feel like a likely next step, unlocking double-tap to do more than the simple actions it's currently limited to. Third-party apps can't use it yet either, unless it's inside a pop-up notification.


It needs to be more frictionless

The whole idea of gestural inputs isn't so much a VR/AR idea as an ambient computing one. It reminds me of Soli, the radar-based gesture sensor Google experimented with for years, designed to be used at home or anywhere else without reaching for a touchscreen or a button.

To make this work, it needs to not feel annoying, and I can't know that until I live with it for a time. I tend to ignore new shortcuts and features on my phone, defaulting to the flow I'm already familiar with. On the Apple Watch, I'm still sometimes tapping the "answer" button on the screen instead of remembering I could have just double-tapped. In VR, I've developed some muscle memory for some everyday gestures: Double-tapping on the side of the Quest headset can show the real world via the cameras. And yet I forget to use voice commands in VR.

The best example of double-tap on the Watch right now is when messages pop up. Double-tapping in each phase kicks off a different function: First it enters dictation, and then it sends the message off. It makes the process of a quick response feel so much more fluid. But for timers, for example, I can only stop one, not start one. On watch faces, I can scroll through a few basic pop-up widget panes in the watch's "smart stack," but I can't open any with a double-tap. Siri could do some of these things, but again, how that works together with double-tap isn't fully worked out yet.
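That messages flow is a nice example of one gesture meaning different things depending on UI context. As a hypothetical sketch (the state names and actions below are my own illustration, not Apple's API), the phase-dependent dispatch might look like a tiny state machine:

```python
# Hypothetical sketch: dispatching the same double-tap gesture to a
# different action per UI state, mirroring the Watch's message flow
# (first double tap starts dictation, the next sends the reply).
# All names here are illustrative assumptions.

class MessageFlow:
    def __init__(self):
        self.state = "notification"
        self.log = []

    def on_double_tap(self):
        # The gesture is constant; the action depends on the state.
        if self.state == "notification":
            self.log.append("start dictation")
            self.state = "dictating"
        elif self.state == "dictating":
            self.log.append("send message")
            self.state = "sent"
        else:
            self.log.append("ignored")

flow = MessageFlow()
flow.on_double_tap()  # enters dictation
flow.on_double_tap()  # sends the message
```

Extending the same pattern to timers or the smart stack would just mean adding states and actions, which is roughly what "unlocking double-tap to do more" would require.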


Mark Zuckerberg using the EMG wristband in a demo I saw a year ago. Will watch gestures find a pathway to evolve to this level of complexity for AR and VR?


Why this will matter

I think about shortcuts and workflow because I think about Apple's future of spatial computing. The Vision Pro will be a wrap-around headset controlled totally by hand and eye movements. My brief Vision Pro demos showed some pretty basic controls, like my hands were scrolling, pointing and tapping to open and move things. But what if I want to do things faster? Will there be ways to do quick commands, invoke quick moves similar to how I use multifinger gestures on my trackpad, or keyboard shortcuts? Could a watch, which doesn't have to always be in view of the Vision Pro's cameras, be a more reliable way to do these things? Or could the watch's display combine somehow with gestures?

I'm sort of flailing at imagining this future because it hasn't been written yet. Today's VR headsets use standard game controller-type inputs and some hand gestures, and still rely on keyboards and trackpads. They're not fully next-gen computing devices yet. Apple looks like it's trying to advance the input conversation, while Meta does the same on a parallel path. But Apple has its smartwatch gestures now, even if they're not connected to VR and AR yet. 

If the commands become good enough, could they also extend to working on iPads, or Macs, or with TVs, or anywhere else? That's the whole goal of ambient computing. But to this point, no company has figured out that new gestural language in everyday wearable use. In a farther-off future with smart glasses and watches, and neural input sensors, it's going to be essential. In the meantime, we're starting to see the smallest of stepping stones… one tap at a time.