Google Pumps Brakes on AI Overviews Search After Telling Us to Eat Glue, Rocks

The internet giant's public experiment with AI-generated summaries for search results has had a rough start.

Ian Sherr Contributor and Former Editor at Large / News
At its I/O developer conference earlier this month, Google talked up AI Overviews, as well as its Gemini AI systems writ large.

Google says it will scale back the use of AI-written summaries for some search results, after its systems told some users to add glue to their pizza, suggested how many rocks to eat per day, and repeated to others a racist conspiracy theory about former President Barack Obama.

The internet giant said in a statement late Thursday that it will limit some responses from its AI Overviews service, particularly in instances where it detects users are asking "nonsensical" or satirical questions. The company also said it's refining how it answers questions on health-related topics.

While defending the overall accuracy of its AI-related search results, the company acknowledged that "some odd, inaccurate or unhelpful AI Overviews certainly did show up."

"At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors," Google search head Liz Reid wrote, adding that more than a dozen changes have since been made to the systems in response. "As is always the case when we make improvements to Search, we don't simply 'fix' queries one by one, but we work on updates that can help broad sets of queries, including new ones that we haven't seen yet."

Read more: Google's AI Overviews: What It Is and Why It's Getting Things Wrong

Google's public announcement that it's reducing its AI Overviews service marks the latest in a growing list of troubled product launches from tech's biggest names as they scramble to take the lead in the AI boom. 

Other giants, including Facebook parent Meta, Microsoft and X, have rushed to add AI features to some of their most widely used products, with decidedly mixed results. Some of the most high-profile embarrassments have included chatbots that profess their love for users, summaries of real-time events that never happened, and image generators that produce inaccurate depictions of people throughout history. Still, companies and governments are continuing to embrace AI as a necessary tool for the future. (For hands-on CNET reviews of generative AI products including Gemini, Claude, ChatGPT and Microsoft Copilot, along with AI news, tips and explainers, see our AI Atlas resource page.)

Read more: AI Atlas, Your Guide to Today's Artificial Intelligence

AI Overviews had been in testing for a number of months before Google announced at its I/O developer conference in mid-May that the company would immediately begin rolling out the service much more broadly. At I/O, Google spent the whole of its two-hour keynote presentation talking about its AI efforts, centered on its Gemini model.

Google's problem with AI Overviews as part of its primary search service may go much deeper. AI technologies often struggle with "hallucinations," inventing facts that aren't actually true in an effort to provide a coherent response. At the same time, billions of people use Google's search service to help find information every day. Tech companies, including Google, continue to suggest they can overcome these issues, though many have added a disclaimer to their tools warning users that the information AI provides may not be true.

In its Thursday evening blog post, Google listed several ways it's refining or cutting back on what AI Overviews shares in search results:

  • It's using "better detection mechanisms" for nonsensical queries and limiting the inclusion of satire and humor content.
  • It's limiting the use of user-generated content in "responses that could offer misleading advice."
  • It's added restrictions for queries "where AI Overviews were not proving to be as helpful."
  • It's striving not to show AI Overviews for hard-news topics, "where freshness and factuality are important," and is adding refinements for health-related topics "to enhance our quality protections."

"We know that people trust Google Search to provide accurate information," Google's Reid added in her statement Thursday. "We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously."

Watch this: Everything Google Just Announced at I/O 2024

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.