
Hallucinations: Why AI Makes Stuff Up, and What's Being Done About It

There's an important distinction between using AI to generate content and using it to answer questions.

Lisa Lacy, Lead AI Writer
[Image: A face with a Pinocchio-like nose tries to hide behind a mask. Credit: wildpixel/iStock via Getty Images]

Less than two years ago, cognitive and computer scientist Douglas Hofstadter demonstrated how easy it was to make AI hallucinate when he asked a nonsensical question and OpenAI's GPT-3 replied, "The Golden Gate Bridge was transported for the second time across Egypt in October of 2016."

Now, however, GPT-3.5 — which powers the free version of ChatGPT — tells you, "There is no record or historical event indicating that the Golden Gate Bridge, which is located in San Francisco, California, USA, was ever transported across Egypt."  

It's a good example of how quickly these AI models evolve. But for all the improvements on this front, you still need to be on guard.

AI chatbots continue to hallucinate and present material that isn't real, even if the errors are less glaringly obvious. And the chatbots confidently deliver this information as fact, which has already generated plenty of challenges for tech companies and headlines for media outlets.

Taking a more nuanced view, hallucinations are actually both a feature and a bug — and there's an important distinction between using an AI model as a content generator and tapping into it to answer questions.

Since late 2022, we've seen the introduction of generative AI tools like ChatGPT, Copilot and Gemini from tech giants and startups alike. As users experiment with these tools to write code, essays and poetry, perfect their resumes, create meal and workout plans and generate never-before-seen images and videos, we continue to see mistakes, like inaccuracies in historical image generation. It's a good reminder that generative AI is still very much a work in progress, even as companies like Google and Adobe showcase tools that can generate games and music to demonstrate where the technology is headed.

If you're trying to wrap your head around what hallucinations are and why they happen, this explainer is for you. Here's what you need to know.

What is an AI hallucination?

A generative AI model "hallucinates" when it delivers false or misleading information.

A frequently cited example comes from February 2023, when Google's Bard chatbot (now called Gemini) was asked about the discoveries made by NASA's James Webb Space Telescope and it incorrectly stated the telescope took the first pictures of a planet outside our solar system. But there are plenty of others.

ChatGPT falsely stated an Australian politician was one of the guilty parties in a bribery case when he was in fact the whistleblower. And during a two-hour conversation, Bing's chatbot eventually professed its love for New York Times tech columnist Kevin Roose.

According to Stefano Soatto, vice president and distinguished scientist at Amazon Web Services, a hallucination in AI is "synthetically generated data," or "fake data that is statistically indistinguishable from actual factually correct data." (Amazon Web Services works with clients like LexisNexis and Ricoh to build generative AI applications with Anthropic's Claude 3 Haiku model.)

Let's unpack that a little. Take, for example, an AI model that can generate text and was trained on Wikipedia. Its purpose is to generate text that looks and sounds like the posts we already see on Wikipedia.

In other words, the model is trained to generate data that is "statistically indistinguishable" from the training data, or that has the same type of generic characteristics. There's no requirement for it to be "true," Soatto said.

How and why does AI hallucinate?

It all goes back to how the models were trained.

The large language models that underpin generative AI tools are trained on massive amounts of data, like articles, books, code and social media posts. They're very good at generating text that's similar to whatever they saw during training.

Let's say the model has never seen a sentence with the word "crimson" in it. It can nevertheless infer this word is used in similar contexts to the word "red." And so it might eventually say something is crimson in color rather than red.

"It generalizes or makes an inference based on what it knows about language, what it knows about the occurrence of words in different contexts," said Swabha Swayamdipta, assistant professor of computer science at the USC Viterbi School of Engineering and leader of the Data, Interpretability, Languageand Learning(DILL) lab. "This is why these language models produce facts which kind of seem plausible but are not quite true because they're not trained to just produce exactly what they have seen before."


Hallucinations can also result from improper training and/or biased or insufficient data, which leave the model unprepared to answer certain questions.

"The model doesn't have contextual information," said Tarun Chopra, vice president of product management at IBM Data & AI. "It's just saying, 'Based on this word, I think that the right probability is this next word.' That's what it is. Just math in the basic sense."

How often does AI hallucinate?

Estimates from gen AI startup Vectara show chatbots hallucinate anywhere from 3% to 27% of the time. The company maintains a Hallucination Leaderboard on the developer platform GitHub, which keeps a running tab on how often popular chatbots hallucinate when summarizing documents.
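
That kind of leaderboard boils down to simple arithmetic: of the summaries a model produces, what share contain claims the source document doesn't support? Here's a toy sketch of that tally. The labels below are invented, and in Vectara's case the judging is done by its own evaluation model rather than by hand.

```python
# A minimal sketch of a hallucination rate: the share of generated summaries
# judged unsupported by their source documents. The labels are invented here;
# Vectara's leaderboard uses its own evaluation model to produce such judgments.
labeled_summaries = [
    {"model": "chatbot-a", "supported_by_source": True},
    {"model": "chatbot-a", "supported_by_source": False},
    {"model": "chatbot-a", "supported_by_source": True},
    {"model": "chatbot-a", "supported_by_source": True},
]

hallucinated = sum(1 for s in labeled_summaries if not s["supported_by_source"])
rate = hallucinated / len(labeled_summaries)
print(f"Hallucination rate: {rate:.0%}")  # 25% for this toy sample
```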

Tech companies are well aware of these limitations.

For example, ChatGPT warns, "ChatGPT can make mistakes. Consider checking important information," while Google includes a disclaimer for Gemini that says, "Gemini may display inaccurate info, including about people, so double-check responses."

An OpenAI spokesperson said the company is "continuing to make improvements to limit the issue as we make model updates."

According to OpenAI's figures, GPT-4, which came out in March 2023, is 40% more likely to produce factual responses than its predecessor, GPT-3.5.

In a statement, Google said, "As we've said from the beginning, hallucinations are a known challenge with all LLMs — there are instances where the AI just gets things wrong. This is something that we're constantly working on improving."

When asked about hallucinations in its products, a Microsoft spokesperson said it has "made progress on grounding, fine-tuning and steering techniques to help address when an AI model or AI chatbot fabricates a response."

Can you prevent AI hallucinations?

We can't stop hallucinations, but we can manage them.

One way is to ensure the training data is high quality and sufficiently broad, and that the model is tested at various checkpoints.

Swayamdipta suggested a set of journalism-like standards in which outputs generated by language models are verified by third-party sources.

Another solution is to embed the model within a larger system — more software — that checks consistency and factuality and traces attribution.

"Hallucination as a property of an AI model is unavoidable, but as a property of the system that uses the model, it is not only unavoidable, it is very avoidable and manageable," Soatto said.

This larger system could also help businesses make sure their chatbots align with other constraints, policies or regulations, and avoid the kind of lawsuit Air Canada found itself in after its chatbot hallucinated details about the airline's bereavement policy.

"If users hope to download a pretrained model from the web and just run it and hope that they get factual answers to questions, that is not a wise use of the model because that model is not designed and trained to do that," Soatto added. "But if they use services that place the model inside a bigger system where they can specify or customize their constraints … that system overall should not hallucinate."

A quick check for users is to ask the same question in a slightly different way to see how the model's response compares.

"If someone is a habitual liar, every time they generate a response, it will be a different response," said Sahil Agarwal, CEO of AI security platform Enkrypt AI. "If a slight change in the prompt vastly deviates the response, then the model actually didn't understand what we're asking it in the first place."

Are AI hallucinations always bad?

The beauty of generative AI is its potential for new content, so sometimes hallucinations can actually be welcome.

"We want these models to come up with new scenarios, or maybe new ideas for stories or … to write a sonnet in the style of Donald Trump," Swayamdipta said. "We don't want it to produce exactly what it has seen before."

And so there's an important distinction between using an AI model as a content generator and using it to factually answer questions.

"It's really not fair to ask generative models to not hallucinate because that's what we train them for," Soatto added. "That's their job."

How do you know if an AI is hallucinating?

If you're using generative AI to answer questions, it's wise to do some external fact-checking to verify responses.

It might also be a good idea to lean in to generative AI's creative strengths but use other tools when seeking factual information.

"I might go to a language model if I wanted to rephrase something or help with some kind of writing tasks as opposed to a task that involves correct information generation," Swayamdipta said.

Another option is retrieval-augmented generation (RAG). With RAG, the overall system retrieves relevant source material, grounds the model's response in it and delivers the answer with a link to the source, which the user can double-check.
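
Here's a bare-bones sketch of that flow. A toy keyword-overlap retriever stands in for a real vector search, and the "answer" simply surfaces the retrieved passage with its link rather than calling a real model; the documents and URLs are invented for illustration.

```python
# Minimal retrieval-augmented generation sketch. The documents, the keyword
# retriever and the answer format are invented for illustration; production
# systems use vector search over large indexes plus a real model call.
DOCUMENTS = [
    {"url": "https://example.com/webb-facts",
     "text": "The James Webb Space Telescope launched in December 2021."},
    {"url": "https://example.com/exoplanets",
     "text": "The first image of an exoplanet was captured in 2004."},
]

def retrieve(question: str) -> dict:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCUMENTS, key=lambda d: len(q_words & set(d["text"].lower().split())))

def answer(question: str) -> str:
    source = retrieve(question)
    # A real system would pass this retrieved passage to the model as context;
    # here we just return it along with its link so the user can double-check.
    return f"{source['text']} (Source: {source['url']})"

print(answer("When did the James Webb Space Telescope launch?"))
```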

OpenAI's GPT-4 has the ability to browse the Internet if it doesn't know the answer to a query — and it will cite where the information came from.

Microsoft's Copilot can also search the web for relevant content to inform its responses, and it includes links to the websites where users can verify those responses.

Will we ever get to a point where AI doesn't hallucinate?

Hallucinations are a result of training data limitations and lack of world knowledge, but researchers are working to mitigate them with better training data, improved algorithms and the addition of fact-checking mechanisms.

In the short term, the technology companies behind generative AI tools have added disclaimers about hallucinations. 

Human oversight is another way to better manage hallucinations when factual information is at stake. But it may also come down to government policies to ensure guardrails are in place to guide future development.

The EU in March approved the Artificial Intelligence Act, which seeks to foster the development of trustworthy AI with clear requirements and obligations for specific uses.

According to Chopra, the EU AI Act "provides a much tidier framework for ensuring transparency, accountability and human oversight" in developing and deploying AI. "Not every country is going to do the same thing, but the basic principles … are super, super critical," he added.  

In the meantime, we'll have to use a multi-pronged strategy to take advantage of what these models offer while limiting the risks.

"I think it helps to not expect of machines what even humans cannot do, especially when it comes to interpreting the intent of humans," Soatto said. "It's important for humans to understand [AI models], exploit them for what they can do, mitigate the risks for what they're not designed to do and design systems that manage them."

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.