Jessica Dolcourt is a passionate content strategist and veteran leader of CNET coverage. As Senior Director of Content Operations, she leads a number of teams, including Thought Leadership, Speed Desk and How-To. Her CNET career began in 2006, testing desktop and mobile software for Download.com and CNET, including the first iPhone and Android apps and operating systems. She continued to review, report on and write a wide range of commentary and analysis on all things phones, with an emphasis on iPhone and Samsung. Jessica was one of the first people in the world to test, review and report on foldable phones and 5G wireless speeds.
Jessica led CNET's How-To section for tips and FAQs in 2019, guiding coverage of topics ranging from personal finance to phones and home. She holds an MA with Distinction from the University of Warwick (UK).
One of the Nexus 5 smartphone's best new features is the always-listening, touchless control over Google Now, the name by which the platform's personal assistant is known.
By saying the words "OK, Google" when the phone is unlocked, you can launch any of Google Now's actions -- like searching the Web or dialing a number -- without having to touch the screen. Several Motorola phones, like the Moto X, did this prior to the Nexus 5's Android 4.4 KitKat OS.
Voice-activated Google Now is a terrific little convenience that can save time or give you the freedom to go hands-free. It's also another stepping stone for what Google, and other companies working on voice actions, can build out next.
For instance, I'd like to be able to issue secondary voice commands -- a series of commands, really -- so I can stay entirely hands-free: both hands on the wheel, in a chicken I'm stuffing, or around a squirmy child or pet.
What if, when my cell phone rings, I could vocally instruct Google to answer the call and then turn on speakerphone, so I could keep doing what I'm doing uninterrupted?
Similarly, what if Google Now were able to interpret requests to adjust the phone's volume or brightness, or open the Settings menu and then open another submenu while you decide on your next selection?
There's a tremendous amount that Google's voice actions can do, like call a business you search out by name -- as long as there's only one instance of the shop near you. Otherwise, the search assistant may present you with a list of choices that you won't be able to narrow down until you manage to free a hand.
Likewise, if you rattle off very specific instructions, your Android phone can set a reminder for a certain time, but you'll still need to tap the screen to confirm the reminder. In my voice actions future, you'll be able to daisy-chain voice commands to set the time and approve the reminder, which the software will understand based on the context of the initial request.
In other words, as long as I'm still in the reminders app, Google Now should assume that commands relate to the reminder app, unless I completely switch tacks and request something else ("OK, Google. How long will it take to drive to Schenectady?")
I imagine a Google Now that can juggle a handful of commands as adeptly as a human who hears step-by-step dictation: "OK, Google. Search for 'best restaurants in San Francisco.' OK, Google: Scroll down. OK, Google: Pick the menu for Boulevard." And so on.
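To make the idea concrete, here's a minimal sketch of what that kind of context-scoped routing might look like under the hood. Everything here is invented for illustration -- the class, the phrase matching, and the app names are assumptions, not any real Google Now API: commands are interpreted against whichever app is in focus, unless the utterance clearly switches topics.

```python
# Hypothetical sketch of context-scoped voice command routing. All names
# here are illustrative; this is not a real Google Now interface.

class VoiceContext:
    def __init__(self):
        self.active_app = None  # e.g. "reminders" after "set a reminder"

    def handle(self, utterance):
        # Normalize the utterance and drop the "OK, Google" hotword.
        utterance = utterance.lower().removeprefix("ok, google").strip(" .:,?")
        # Global commands (search, navigation) always switch context.
        if utterance.startswith(("how long will it take", "search for")):
            self.active_app = "search"
            return f"search: {utterance}"
        # Otherwise, assume the command applies to the app already in focus,
        # which is what lets "confirm" complete a pending reminder.
        if self.active_app == "reminders" and utterance in ("yes", "confirm", "save it"):
            return "reminder confirmed"
        if utterance.startswith("set a reminder"):
            self.active_app = "reminders"
            return "reminder drafted, awaiting confirmation"
        return f"{self.active_app or 'home'}: {utterance}"

ctx = VoiceContext()
print(ctx.handle("OK, Google. Set a reminder for 6 p.m."))  # reminder drafted, awaiting confirmation
print(ctx.handle("OK, Google. Confirm"))                    # reminder confirmed
print(ctx.handle("OK, Google. How long will it take to drive to Schenectady?"))
```

The design choice mirrors the article's rule of thumb: a follow-up stays bound to the current app until a clearly topic-switching phrase resets the context.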
Even if you do have access to your digits while using the phone, it would be great to have options to intersperse voice actions with typing, which I already do now when dictating short messages or notes.
Say you've just taken a photo or batch of photos you'd like to immediately send to a contact. I envision an even more intelligent assistant, savvy enough to execute the command "OK, Google. Send these photos to Jason" after you've selected them in the gallery. It would also help, of course, to be able to vocally launch Google Voice Actions from the photo gallery app.
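The tricky part of a command like that is resolving "these photos" to the gallery's current selection. A minimal sketch of that slot-filling step might look like the following; the pattern and function are hypothetical, invented purely to illustrate the idea:

```python
import re

# Hypothetical slot-filling for a gallery-scoped voice command like
# "Send these photos to Jason." Not a real Google Now API.
SEND_PATTERN = re.compile(r"send (?:these|the) photos? to (?P<contact>\w+)",
                          re.IGNORECASE)

def parse_gallery_command(utterance, selected_photos):
    """Bind the deictic phrase 'these photos' to the gallery's selection."""
    match = SEND_PATTERN.search(utterance)
    if not match:
        return None
    return {"action": "share",
            "contact": match.group("contact"),
            "photos": selected_photos}

cmd = parse_gallery_command("OK, Google. Send these photos to Jason.",
                            ["IMG_001.jpg", "IMG_002.jpg"])
# cmd == {"action": "share", "contact": "Jason",
#         "photos": ["IMG_001.jpg", "IMG_002.jpg"]}
```

The point is that the assistant doesn't need the utterance to name the photos at all; the foreground app supplies that context.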
What I'm proposing would absolutely require a far deeper level of integration with the operating system's many menus, submenus, and apps. Yet it's a direction I think we're headed in, and one that Google (along with Apple, Nuance, and others) is entirely capable of pursuing.
I, probably like some of you, have in the past been skeptical about speaking commands into my phone, at least in public areas. Yet the practice is already becoming more commonplace (at least here in Silicon Valley).
As the architects of voice commands tap into deeper and deeper corners of our electronics, we will come to rely on using a complex chain of commands -- both on the phone and, surely, in other electronic devices around the home.