Understanding AI Hallucinations

Lately, it seems like artificial intelligence can do virtually anything a person can, from writing content and searching the web to understanding human speech and scheduling appointments. These advanced abilities raise the question: can AI mimic negative human traits? For example, can it lie? The answer is both yes and no. While AI lacks any sense of morality or ethics, it can still lead us astray with AI hallucinations.

What are AI hallucinations?

Sometimes, large language models (LLMs) generate incorrect or entirely fabricated information. This phenomenon is known as an "AI hallucination," and it's something to be wary of when using AI chatbots or relying on AI to create content. In fact, most AI chatbots include a disclaimer that the information they provide may not be accurate.

Google Bard's prompt box is accompanied by a disclaimer that it may produce inaccurate information.

AI hallucinations include nonsensical and illogical responses as well as factually incorrect statements (hence the earlier mention of AI "lying"). But AI isn't truly lying, or even hallucinating.

What causes AI hallucinations?

AI isn't human. It can't choose to deceive us or have a mental breakdown. In fact, generative AI doesn't use logic at all, at least not the way humans do. Tools like Bard and ChatGPT replicate language patterns: they predict which words are likely to come next based on their training data. This means they may produce content that sounds plausible but is completely fabricated.

AI hallucinations happen for several reasons. The AI model may not have been trained thoroughly, or the data it was trained on may contain gaps and mistakes. Hallucinations can also happen when prompts are unclear or illogical. In other cases, errors in how the model encodes and retrieves information lead it to generate false data.

Sometimes, engineers and data scientists simply don't know why these hallucinations happen. At their core, AI tools are designed to deliver coherent responses, even if they have to take a leap to achieve this goal. Generative AI pioneers, including ChatGPT creator OpenAI, are actively iterating on the technology to reduce (and potentially eliminate) AI hallucinations.

Google Bard’s infamous demo

One of the best-known cases of an AI hallucinating happened during Google's disastrous demo of its AI chatbot, Bard. When asked, "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?" Bard responded, "JWST took the very first pictures of a planet outside of our own solar system."

This is verifiably and undeniably incorrect. In reality, the European Southern Observatory's Very Large Telescope captured the first image of a planet outside our solar system back in 2004.

This AI hallucination was undoubtedly embarrassing (and costly) for Google, but it’s served AI adopters and engineers well in the months since. Now we know AI can get it wrong, and we can take steps to avoid hallucinations in the future.

How to avoid AI hallucinations

While AI hallucinations aren’t 100% avoidable, there are several ways to prevent them from compromising your AI-generated content.

Strategic prompting

AI outputs are only as good as the prompts used to generate them. AI experts recommend a few prompting strategies to avoid AI hallucinations, including:

Limiting outcomes

Asking the AI yes-or-no questions, or capping the number of items you want when forming your prompt, can prevent the LLM from trying to fill in the gaps or giving "out there" responses.

Prompting ChatGPT with limited options. "What were the three tallest buildings in America as of 2021?"
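If you're generating content through code rather than a chat window, the same idea carries over. Here's a minimal sketch of a constrained prompt, assuming the official OpenAI Python client (v1-style chat completions); the model name and token limit are illustrative choices, not recommendations.

# Minimal sketch: narrow the answer space so the model has less room to improvise.
# Assumes the official OpenAI Python client (v1+) and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            # A bounded, numbered question invites fewer "out there" answers
            # than an open-ended "tell me about tall buildings in America."
            "content": "What were the three tallest buildings in America as of 2021? "
                       "Answer with a numbered list of building names only.",
        }
    ],
    max_tokens=60,  # a tight output budget also discourages rambling
)

print(response.choices[0].message.content)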

Context and examples

Like people, AI responds more accurately with clear context and examples. Give it any and all relevant information. You can also ask it to exclude certain responses. If you have examples, provide those as well.

Prompting ChatGPT with context, examples, and what to avoid. "Provide a short description of Catherine the Great, but don't include information from fictional depictions like Hulu's 'The Great.' For example, 'Catherine the Great was best known for...'"

Assign a role to AI

You can also do a bit of role-playing with AI. Assigning a chatbot a “persona” has been shown to reduce AI hallucinations and clarify the type of response you want to receive.

Assigning a role to the AI when prompting ChatGPT. "You are a professional pastry chef. What is the best temperature to bake croissants?"
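For developers, the persona trick maps naturally onto the "system" message in a chat-style API. The snippet below is a rough sketch, again assuming the OpenAI Python client; the persona text and model name are only examples.

# Rough sketch: assign the model a persona via the system message.
# Assumes the official OpenAI Python client (v1+) and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        # The system message sets the persona and narrows the scope of the answer.
        {"role": "system", "content": "You are a professional pastry chef. "
                                      "Only answer questions about baking."},
        {"role": "user", "content": "What is the best temperature to bake croissants?"},
    ],
    temperature=0,  # lower temperature makes answers more repeatable, not automatically more truthful
)

print(response.choices[0].message.content)

Keep in mind that a low temperature only makes the output more consistent; the persona and the fact checking described below still do the real work.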

Ask for honesty

It sounds strange, but AI just wants to do its job. Sometimes that means grasping at straws to deliver a great response. You can curb this "overenthusiasm" by instructing the AI not to lie, or by giving it permission to say "I don't know."

Prompting ChatGPT to be honest in its responses or admit if it doesn't know. "Who was the queen of apple pie? Don't lie. If you don't know, say 'I don't know.'"
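The same instruction works in code. One approach worth sketching: give the model explicit permission to admit uncertainty, then treat "I don't know" as a signal to route the question to a human rather than publishing a guess. As before, this assumes the OpenAI Python client and an example model name.

# Sketch: allow the model to admit uncertainty, then flag uncertain answers for review.
# Assumes the official OpenAI Python client (v1+) and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "system", "content": "Do not make up facts. If you are not sure, reply exactly: I don't know."},
        {"role": "user", "content": "Who was the queen of apple pie?"},
    ],
)

answer = (response.choices[0].message.content or "").strip()
if answer.lower().startswith("i don't know"):
    print("The model declined to answer; hand this one to a human fact checker.")
else:
    print(answer)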

Fact checking

Your second, and most important, defense against hallucination-riddled content is to thoroughly verify anything generated by AI. Even if you’ve become a skilled AI prompter, it’s always wise to cross-check facts against reputable external sources. If you can’t confirm a claim, play it safe and remove it from your content.

Don’t fear AI hallucinations

The possibility of hallucinations may give you pause, but don’t let that stop you from taking advantage of generative AI. Remember, AI is here to support us, not replace us. And, it’s still in a relatively nascent phase. Like all tech innovations, AI development is going to have some growing pains.

But issues like AI hallucinations don’t negate all the benefits of AI — including faster, more convenient content writing. Just remember to be aware, prompt carefully, and always verify facts before you publish them.
