The Google Assistant just picked up some new reading skills. Google on Tuesday announced new technology for its digital helper software that lets it read long-form text out loud. For now, the feature is meant mainly for listening to articles, blog posts and short stories on the web.
Google said the technology is different from other screen-reading software because it's meant to read stuff in a natural-sounding voice and cadence, so people won't have trouble listening to the audio for longer periods of time. That includes following grammar and pauses, as if you were listening to an audiobook.
To use the feature, pull up an article or blog post on an Android phone and say either "Hey Google, read it," or "Hey Google, read this page." The text can be read aloud and translated into 42 languages, including Hindi and Spanish.
Google made the announcement at CES in Las Vegas, where the search giant has again crafted a marketing blitz for the world's biggest tech conference.
The search giant is positioning the announcement as more of a new capability than a one-off feature. The company said the technology could have implications for dictation and publishing. It could also improve accessibility features. Google described the announcement as a "preview" but didn't say when it'd be officially released. The company said it's experimenting with including features like auto-scroll text highlighting, so people can follow along as the words are being read aloud.
Still, the tool has its pitfalls. It can read articles well because they tend to have a linear structure. If you asked the Assistant to read a random webpage, it might read back the text in an incoherent jumble of phrases, Google said.
The search giant has made big investments in making the Assistant sound less robotic. Two years ago, Google introduced Duplex, a technology that uses eerily human-sounding artificial intelligence software to book restaurant reservations and hair appointments. The AI is patterned after human speech, using verbal tics like "uh" and "um." It speaks with the cadence of a real person, pausing before responding and elongating certain words as though it's buying time to think.
The tech brought to life a vision of what a voice assistant could sound like in the future: natural and lifelike, instead of the semirobotic, disembodied voice you hear coming from a Google Home or Amazon Echo today. The demo immediately raised flags for AI experts, industry watchers and consumers, who worried about the ethics of creating robots that could fool people into thinking they were talking to other humans.
For Google, it's crucial to introduce new tricks that separate the Assistant from Alexa, which became a household name after Amazon released it in 2014. (Google followed suit with the Assistant two years later.) Last year at CES, the company announced a new interpreter mode that could translate conversations in real time, a feature that leans into Google's formidable machine learning and engineering chops. Earlier this month, Google brought the feature to smartphones. None of the competing digital assistants have features that are as ambitious.
When it comes to smart speakers, which are key to hooking consumers on smart home products, Google is playing catch-up. While the search giant had slowly been closing the gap with Amazon in recent years, Google's smart speaker growth bombed in 2019. Its share dropped from about 30% to 12%, while Amazon's grew from about 32% to 37%, according to Canalys. The shift came from massive gains by the Chinese companies Alibaba and Baidu, which both jumped ahead of Google.
Still, Google has made progress with its digital helper. Google on Tuesday also announced that 500 million people use the Assistant every month -- the first time Google has revealed user figures for the product. That's about half the user base of several other Google products, including Maps, Drive and Chrome.