Hallucinations
What are they?
Hallucinations occur when generative AI invents information that sounds plausible but is not true. This can include factual errors, fabricated sources, false names, or events that never happened. The problem is particularly widespread in language models, which are trained to predict the next word in a sentence — not to distinguish truth from falsehood.
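To make that concrete, here is a minimal sketch in Python, using made-up probabilities and a deliberately fictitious journal name: the model's only job is to pick a continuation that scores as plausible, and nothing in that step checks the candidate words against reality.

# Minimal illustration with invented numbers: a language model ranks possible
# next words by how plausible they are given the text so far. No step here
# consults a source of truth, so a fluent but false continuation can win.
import random

prompt = "The researcher published her main findings in the journal"

# Made-up stand-ins for a real model's next-word probabilities.
next_word_scores = {
    "Nature": 0.31,            # plausible, may be true
    "Science": 0.27,           # plausible, may be true
    "Acta Imaginaria": 0.22,   # plausible-sounding, but this journal is fictitious
    "banana": 0.001,           # implausible, effectively never chosen
}

def pick_next_word(scores):
    # Sample a continuation weighted by plausibility alone.
    words = list(scores)
    weights = [scores[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(prompt, pick_next_word(next_word_scores))

With these invented numbers, roughly one run in four would name the fictitious journal, and that sentence would read just as fluently as the true alternatives.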
Although many AI developers claim that their latest models “hallucinate less,” experience so far shows only modest improvement. Hallucinations are not a technical bug that can simply be fixed; they are a fundamental feature of how generative AI works. This makes hallucinations a serious problem, especially when users blindly trust AI-generated content.
Examples:
A user asks a language model to write a short profile of a researcher, and the AI invents an academic article that was never written, or a university position that does not exist. In another case, someone asks the AI to find a ruling from a Danish court case, and it produces a document in seemingly correct legal language, but the entire document is fabricated.
In the United States, several lawyers have been criticized or sanctioned for submitting court documents that were partially written by AI — and filled with fake references. In Denmark, media outlets and public authorities have tested AI as an assistant, but always with a requirement for human oversight, because the risk of error is too high.
What to consider?
Never use AI-generated content uncritically, especially not for important or sensitive purposes. Always read the text through carefully, and check facts, names, and sources before publishing anything. AI is an excellent tool for inspiration and drafting, but it requires critical human review.
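As one narrow example of such a check, the sketch below (Python) assumes the AI's answer cites DOIs and that doi.org's public handle-lookup endpoint is available; it only flags references whose DOI cannot be resolved at all, and a DOI that does resolve still has to be read to confirm it actually supports the claim.

# Narrow sketch, not a full fact-check: it only asks doi.org whether a cited
# DOI exists. An unresolvable DOI is a strong hint the reference is fabricated;
# a resolvable one still needs human reading and judgment.
import json
import urllib.error
import urllib.request

def doi_exists(doi, timeout=10.0):
    # Query doi.org's handle-lookup endpoint (assumed available at /api/handles/).
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            data = json.load(response)
        return data.get("responseCode") == 1  # 1 means the handle was found
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

# One DOI taken from a real article and one that is deliberately made up.
for doi in ["10.1038/nature14539", "10.9999/this-does-not-exist"]:
    verdict = "resolves" if doi_exists(doi) else "does NOT resolve - check by hand"
    print(doi + ": " + verdict)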
Carefully consider whether you want an AI chatbot to answer questions from users — particularly in fields like healthcare, finance, law, or other areas where misinformation could have serious consequences. If you use AI for search, never take the answers at face value — treat them as suggestions that must be verified.
In short: hallucinations are not going away, and it is naive to believe that the next model version will solve the problem. Instead, we must learn to use AI with critical distance and editorial responsibility.