
AI Needs to Earn Our Trust Before It Can Deliver on Its Promises

Analysis: Google, Microsoft and others seemingly assume people already trust AI to handle everyday tasks on their behalf.

Lisa Eadicicco, Senior Editor

AI is a new frontier for tech companies, and rivals are looking to stake out territory.

Getty Images

Artificial intelligence is going to revamp the way we work, communicate and generally use our phones and computers, if Google, Microsoft and OpenAI are to be believed.

All three tech giants held events in recent weeks that were rife with demos illustrating AI's role in shaping our interactions with the internet. But bringing these visions to fruition requires trust from the general public. 

The tech industry hasn't had such a great track record in that department over the past several years, as evidenced by episodes like Meta's Cambridge Analytica scandal and Google's location-tracking controversy. Such instances have raised questions and concerns about the prevalence of Big Tech, its reach into our lives and the way these companies handle consumers' information. 


There's already some evidence to suggest people are skeptical about AI's growing foothold in our lives. A Pew Research Center study found that 52% of Americans are more concerned than excited about the increased use of artificial intelligence. And a report from Bentley University and advisory firm Gallup indicates that 79% of Americans don't trust businesses to use AI responsibly.

Earning and maintaining trust is essential for companies like Google, Microsoft and OpenAI. How people embrace these new AI tools could determine the winners and losers of the next major shift in computing.

"All of this progress could get diminished if you're not able to sufficiently mitigate the risks and make AI trustworthy for human beings," said Arun Chandrasekaran, an analyst at Gartner focusing on artificial intelligence and cloud computing.

Read more: Google Finally Feels Like a Search Company Again

What's new from Google, Microsoft and OpenAI

Google, Microsoft and OpenAI all showcased significant developments in their AI systems recently, underscoring just how quickly the tech is evolving. 

Google's Gemini assistant was the star of its Google I/O conference. Gemini is getting a new mode called Gemini Live, a more conversational version of its voice assistant available to Gemini Advanced subscribers. It'll also be able to plan custom travel itineraries for you and answer questions that are more specific to what you're doing on your phone. And Google's Gemini Nano model can listen to phone calls and monitor them for potential fraud in real time using on-device processing.

Alphabet and Google CEO Sundar Pichai also showed what the next iteration of AI helpers could look like. He gave an example of how an AI agent could one day return a pair of shoes on your behalf by finding the order number in your email, filling out the return form and scheduling a UPS pickup. It's an innocuous example, but one that hints at a future in which AI agents do far more than just answer questions.


Google CEO Sundar Pichai talked about the company's vision for AI agents at Google I/O. 

Screenshot/James Martin/CNET

On May 20, just before its Build conference, Microsoft announced a new class of AI-driven computers called Copilot Plus PCs. These computers are built to a specific hardware spec so they can process AI algorithms on-device without relying on the cloud.

One example of a new feature that's possible on these PCs is Recall, which takes snapshots of your PC's screen so you can revisit what you've done if needed. The feature is positioned as an easier way to find files, apps and websites you've recently visited without having to dig through content manually, and the processing happens on-device. You can prevent certain apps and websites from being logged, and Microsoft says you're "always in control."

OpenAI held its own product launch event on May 13 to introduce its latest flagship model, GPT-4o. Among the highlights was ChatGPT's human-like voice, which many likened to actress Scarlett Johansson. Johansson, NPR reported, voiced her concerns and pursued legal action against OpenAI, which eventually pulled the voice from ChatGPT over the similarities, saying the resemblance wasn't intentional. OpenAI also showed how the chatbot can help with math and coding problems and interpret video in addition to speech.


OpenAI announced GPT-4o during an event on May 13. 

OpenAI

The common thread between these developments is that they're more proactive and natural than the tech tools we've been using for years. Features like Recall and Google's real-time scam detection aim to anticipate our needs, while OpenAI's upgraded version of ChatGPT and Google's Gemini helper point to a future in which AI bots feel more like friendly coworkers or personal shoppers than question-and-answer machines.

How much do you trust AI?

The idea of trusting AI to return a purchase, monitor a phone call or help with your homework may sound like a stretch. After all, most people don't even like interacting with automated customer service menus; a survey cited by The Conversation found that people typically try to bypass automated agents to speak with an actual human.

Newer chatbots like ChatGPT and Gemini are much more advanced than the automated phone menus we're used to. But these new tools still assume that people are willing to trust technology to get things done on their behalf. 

Mohsen Bayati, professor of operations, information and technology at the Stanford Graduate School of Business, believes people will be willing to embrace these tools as long as they have some sort of a say, such as approving a transaction before it's finalized. 

"I would be very worried about those applications that are after full automation," he said, saying that being able to have the user be "the last arbiter in that loop" would likely reduce a lot of reliability concerns. 

That's an important point because as sophisticated as these AI systems are, they're not without flaws. They're prone to hallucinations and other issues that can result in unreliable answers.

Issues with Google's AI Overviews

Google's Gemini came under scrutiny earlier this year for producing images that were historically inaccurate. Then in May, its new AI-generated summaries that show up above search results, called AI Overviews, were criticized for surfacing false answers, as users on social media pointed out. In one particularly high-profile example, Google's AI Overviews suggested putting glue on pizza to prevent the cheese from sliding off.


Google's AI Overviews feature provides AI-written snapshots meant to answer your question quickly and conversationally. But they're not always correct.

Screenshot by Lisa Eadicicco/CNET

In a statement provided to CNET, a Google spokesperson said the examples the company has seen are "generally very uncommon queries" and "aren't representative of most people's experiences." The company added that the "vast majority of AI Overviews provide high quality information."

"We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall." the statement said. 

A week later, on May 30, Liz Reid, head of Google Search, published a blog post addressing outlandish answers in AI Overviews like the one mentioned above. She noted that because AI Overviews is integrated with the company's core web ranking systems and can identify "relevant, high-quality" content from Google's index, it usually doesn't hallucinate the way other tools based on large language models can.

She also said part of the reason some AI Overviews advised people to put glue on their pizza or eat rocks is that there isn't much web content addressing these uncommon questions, meaning AI Overviews pulled from "satirical" or "sarcastic" content. Reid added that some circulating AI Overviews screenshots were faked, and that there were a "small number of cases" in which AI Overviews misinterpreted language and presented inaccurate results.

As for what Google is doing, Reid said the company has improved the way it detects nonsensical queries and is limiting the use of user-generated content in responses that could offer misleading advice, among other changes.

The use of AI in search results and chatbots has also raised new questions about whether these tools are sourcing information fairly and appropriately. Multiple newspapers and digital news outlets, including The New York Times, have filed lawsuits against OpenAI and its partner Microsoft for using copyrighted material for training purposes.

The matter of AI safety

OpenAI also recently disbanded its superalignment team, as Wired and other outlets reported, and is instead integrating that work into other parts of the company. The superalignment team, which OpenAI announced last year, was created to ensure that highly advanced AI systems are developed safely.

OpenAI CEO Sam Altman and president Greg Brockman posted a note on X explaining their approach to safety and the company's overall strategy in response to the changes. "We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions," the note read.

Google and Microsoft have also been vocal about their safety efforts and approach to building AI products responsibly. Google reiterated its AI principles during its I/O conference, which include testing for safety, being accountable to people, avoiding the creation or reinforcement of unfair bias and being socially beneficial, among others. Microsoft's Responsible AI page lays out similar values, such as fairness, inclusiveness, transparency, reliability and safety.

But oftentimes, larger societal issues stemming from technological advancements -- such as social media's impact on mental health, especially in young people -- only become apparent in hindsight. 

"Until we see events that bring the public attention to the kind of data we're sharing, and how those companies are using our data, then people will pay attention," said Hanan Hibshi, assistant teaching professor at Carnegie Mellon University's Information Networking Institute. 


Microsoft made a big push to integrate more AI into PCs during its Build conference. 

Microsoft

Getting this right is critical for tech companies, considering AI is being sold as a major shift in personal computing. For the past decade, the tech industry has been searching for its next iPhone moment: a breakthrough that changes our relationship with technology the way the smartphone did. Different ideas about what that could look like have emerged, from virtual reality headsets to smart speakers and smartwatches.

But the overwhelming popularity of ChatGPT following its late 2022 release was a breakthrough moment. New technologies can take a while to catch on, just as the smartwatch did, but it wasn't long before generative AI made its way into everything from smartphones to PCs. 

Whether we're ready to put our trust in AI may still be up for debate. But eventually, as AI becomes more critical to daily tasks, those concerns may take a backseat to the benefits. Robert Seamans, a professor at New York University's Stern School of Business, likens it to entering credit card information online. While people may have been hesitant to do so roughly 25 years ago, many embrace it as the norm today and are willing to accept some level of risk.

"We will maybe imagine ways that the technology might go wrong, or that our interaction with it might go wrong," he said. "But I think once more and more people start using it, [and] once we ourselves start using it, I think very quickly those fears will disappear."


Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.