What are Hallucinations (AI)?

Hallucinations (AI) are outputs where a model produces statements that are incorrect, unsupported, or not grounded in reliable sources. Hallucinations (AI) can occur even when the output is fluent and confident.

Quick definition

Hallucinations (AI) are believable-sounding AI answers that are not true or not supported.

How Hallucinations (AI) work

  • Hallucinations (AI) can occur when a model generates text without reliable grounding.
  • Hallucinations (AI) can be triggered by ambiguous prompts, missing context, or conflicting training signals.
  • Hallucinations (AI) can be reduced by retrieval-augmented generation (RAG), better prompting, and stronger verification steps (see the sketch after this list).
  • Hallucinations (AI) can be harder to detect when source attribution (AI) is not available.
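Below is a minimal sketch of the RAG pattern mentioned above: retrieve a few relevant passages and constrain the model to answer only from them. The document list, the product facts, and the call_llm() placeholder are illustrative assumptions, not a specific library or API; a real system would use an actual retriever and model client.

```python
# Minimal RAG-style grounding sketch to reduce hallucinations.
# Assumptions: an in-memory document list stands in for a real retriever,
# and call_llm() is a hypothetical placeholder for your model client.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; swap in a real LLM call here."""
    raise NotImplementedError("Replace with your model client.")

if __name__ == "__main__":
    docs = [
        "The Acme X100 supports returns within 30 days of purchase.",
        "The Acme X100 ships with a two-year limited warranty.",
    ]
    question = "What is the return window for the Acme X100?"
    prompt = build_grounded_prompt(question, retrieve(question, docs))
    print(prompt)  # The grounded prompt constrains the model to the cited context.
```

The key design choice is that the prompt gives the model an explicit "say you don't know" escape hatch, so missing context produces an abstention rather than an invented answer.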

Why Hallucinations (AI) matter

Hallucinations (AI) matter because inaccurate outputs can mislead users and misrepresent entities.

Hallucinations (AI) affect:

  • trust signals in AI systems and their outputs
  • brand accuracy when a system describes products or policies incorrectly
  • the value of citations, because citations enable verification

Example use cases

  • An LLM answer that cites a correct source but makes an unsupported additional claim.
  • An AI overview that summarizes outdated or incorrect information.
  • A monitoring workflow that flags answer changes that contradict a known definition (a minimal sketch follows below).
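The sketch below illustrates the monitoring use case from the last bullet: compare a new AI answer against a known definition and flag missing or contradicting claims. The required/forbidden phrase lists and the example answer are assumptions for illustration; a production workflow might use an entailment model or human review instead of phrase matching.

```python
# Minimal monitoring sketch: flag answer drift against a known definition.
# The phrase lists are illustrative assumptions, not a real rule set.

from dataclasses import dataclass

@dataclass
class DefinitionCheck:
    term: str
    required_phrases: list[str]   # facts the answer should still contain
    forbidden_phrases: list[str]  # claims that contradict the known definition

def flag_contradictions(answer: str, check: DefinitionCheck) -> list[str]:
    """Return human-readable flags when an answer drifts from the definition."""
    text = answer.lower()
    flags = []
    for phrase in check.required_phrases:
        if phrase.lower() not in text:
            flags.append(f"Missing expected claim: '{phrase}'")
    for phrase in check.forbidden_phrases:
        if phrase.lower() in text:
            flags.append(f"Contradicts known definition: '{phrase}'")
    return flags

if __name__ == "__main__":
    check = DefinitionCheck(
        term="Hallucinations (AI)",
        required_phrases=["not supported"],
        forbidden_phrases=["always intentional"],
    )
    new_answer = "Hallucinations are always intentional errors added by the model."
    for flag in flag_contradictions(new_answer, check):
        print(flag)  # flags both the missing claim and the contradicting one
```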

Related terms