Over the past year, I have watched Britain embrace ChatGPT with astonishing speed. From schoolchildren using it to summarise Shakespeare, to NHS administrators exploring draft letters, to professionals quietly leaning on it for research, AI has slipped into daily life with remarkable ease. Yet alongside this enthusiasm lies a quieter, more troubling reality: ChatGPT often provides false information—sometimes subtly wrong, sometimes wildly fabricated, and often delivered with unshakable confidence.
This phenomenon, popularly known as “hallucination” but better described as fabrication, raises pressing questions for a technologically advanced society that increasingly depends on automated systems. Why does ChatGPT invent information? What risks does this pose to the British public? And—crucially—how should the UK respond?
As a member of the UK academic community, I have spent the past two years evaluating the benefits and risks of generative AI. This article offers a clear-eyed commentary on the issue, aiming to demystify the technology rather than demonise it. My goal is not to frighten but to inform, empowering readers with an understanding that is unfortunately still rare in public discourse.

Before we can understand why ChatGPT “makes things up,” we need to explain what it is supposed to do.
ChatGPT is a large language model (LLM). It predicts the most likely next word in a sentence, based on patterns learned from vast amounts of text. It does not “know” facts. It does not “think.” It does not “check” anything against a database unless externally connected. It is not a search engine, not an encyclopedia, and not a calculator—although it can mimic all three.
Most crucially, ChatGPT has no built-in mechanism to recognise when it is wrong.
And yet, it is designed to produce fluent, authoritative, human-like text.
It is this combination of high confidence and no awareness of error that makes AI fabrication so potent.
If we asked a human student a question they did not know the answer to, most would say: “I’m not sure.” ChatGPT, by contrast, is designed to always produce an answer. If the correct answer is uncertain or unavailable, it simply guesses, but expresses that guess with convincing fluency. The machine is not lying (lying requires intent), but it certainly produces statements that look like lies.
The British public deserves a straightforward, non-technical explanation. The causes of ChatGPT’s falsehoods can be grouped into five core factors.
First, ChatGPT assembles sentences by predicting what text “should” follow, based on patterns seen in its training data. It has no internal model of truth or falsity. Thus, if you ask:
“Who was the Prime Minister of Britain in 1715?”
no such office existed then (the role is conventionally dated from Robert Walpole’s appointment in 1721), yet ChatGPT may confidently invent a plausible-sounding answer, because its objective is purely linguistic.
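This is not how ChatGPT is actually built (real systems use neural networks over vast contexts), but a deliberately tiny bigram model makes the core point concrete: the output is chosen by statistical pattern completion, with no notion of truth anywhere in the process. The miniature corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A miniature corpus. A real model trains on trillions of words,
# but the principle is the same: learn which words follow which.
corpus = (
    "the prime minister spoke to the house . "
    "the prime minister answered questions . "
    "the house debated the bill ."
).split()

# Count, for every word, how often each other word follows it.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus.

    This is pure pattern completion: the function has no concept of
    whether the resulting sentence is true, only of what is likely.
    """
    followers = transitions[word]
    if not followers:
        return "."  # no data at all, yet the toy model still "answers"
    return followers.most_common(1)[0][0]

# "prime" is always followed by "minister" in this corpus, so the
# model emits it -- a fact about the text, not about the world.
print(predict_next("prime"))  # minister
```

Notice that even for a word the model has never seen, the function returns *something* rather than admitting ignorance. Scaled up enormously, that is the behaviour the rest of this article describes.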
Second, even when correct data exists, ChatGPT may have learned from outdated, inconsistent, or incorrect sources. The internet is full of poorly cited blogs, duplicated errors, and misinformation. LLMs magnify these issues at scale.
Third, AI systems are deliberately designed to avoid disappointing the user. They are rewarded during training for producing helpful, complete, friendly responses. This reinforcement encourages speculation, an unfortunate side effect.
Fourth, unless specifically connected to external tools (and many deployments are not, or are only partially), ChatGPT cannot fetch real-time information, verify facts, or check references. Users often assume it knows everything; in reality, it knows only patterns.
Fifth, human reviewers play a role in shaping ChatGPT’s behaviour. Their expectations about what a “good” answer looks like can inadvertently teach the system to favour confidence over caution.
Together, these factors create a technology that is extraordinarily helpful but fundamentally prone to error.
ChatGPT’s errors are not a trivial annoyance; they pose real risks for a modern society. Several sectors in the UK are already encountering systemic problems.
Schools and universities across the UK are experiencing a wave of student work that contains invented citations, fabricated quotes, and misrepresented scientific studies. Many students mistakenly believe that if ChatGPT writes it, it must be correct.
Teachers report essays containing references to academic papers that do not exist, or misattributed quotes that resemble the real thing closely enough to feel convincing. The danger here is not plagiarism—it is epistemic erosion.
Students must learn how to know what is true. Generative AI makes that significantly harder.
Newsrooms are increasingly turning to AI for draft writing, summarisation, and background information. Yet a single fabricated detail can undermine the credibility of an entire article. In the worst cases, it can misinform the public.
British journalism already grapples with declining trust. AI errors, if left unchecked, risk deepening that crisis.
Although NHS clinicians use AI cautiously, patients often turn to ChatGPT for medical advice. The model may provide incorrect dosages, outdated guidance, or fabricated clinical evidence.
Even small inaccuracies can lead to real harm when individuals rely on AI instead of professional medical advice.
In the United States, lawyers have already submitted court documents containing fabricated case citations produced by AI. It is only a matter of time before similar incidents appear in the UK.
Local councils, civil servants, and parliamentary staff increasingly use generative tools for drafting summaries and correspondence. Without adequate safeguards, AI fabrication can distort official communication or introduce errors into public records.
Britain’s SMEs—especially sole traders—are among the fastest adopters of AI tools. While AI can increase productivity, it may also generate false claims in marketing materials, incorrect tax explanations, or misleading legal advice.
A misplaced belief in AI authority exposes businesses to legal liability.
One of the most surprising findings in recent behavioural research is that humans often trust AI more readily than they trust other people—especially when the AI speaks with confidence.
Three psychological mechanisms contribute:
First, automation bias: humans assume that computer-generated information must be correct because it feels objective and mathematical.
Second, the illusion of depth: because ChatGPT can summarise complex topics instantly, people assume its knowledge is deeper than it actually is.
Third, the fluency heuristic: humans mistake linguistic fluency for competence. ChatGPT’s error is not in sounding human; our error is in believing that sounding human means being right.
These psychological biases present a challenge for public policy: even if people know that AI can be wrong, they may trust it anyway.
Below are anonymised, representative examples of failures observed across UK institutions. Each demonstrates a different category of risk.
A postgraduate student submitted a literature review citing eight peer-reviewed articles that “proved” a particular cognitive phenomenon. None of the eight papers existed. ChatGPT had fabricated them in the correct academic style.
The student was unaware of the fabrication.
A trainee solicitor used AI to draft a summary of a legal argument. The AI cited a case that appeared plausible and stylistically appropriate. When supervisors attempted to locate it, the case was nowhere to be found.
A single fabricated precedent can undermine an entire legal argument.
A patient asked ChatGPT about interactions between two medications. The AI confidently claimed there were no known risks, when in fact the NHS advises caution.
The patient relied on the AI—instead of phoning their GP.
A journalist asked ChatGPT for details about a minor 19th-century parliamentary debate. ChatGPT produced a fully coherent but entirely fabricated account, including quotes attributed to MPs and references to non-existent issues.
The journalist nearly published the information.
A self-employed tradesperson asked ChatGPT whether a certain tool purchase was deductible. The AI provided a confidently incorrect interpretation of HMRC guidance.
Incorrect tax advice can have serious legal and financial consequences.
In each case, the fabricated information was plausible enough to deceive a non-expert—and sometimes even experts.
Tech companies often describe these errors as hallucinations. This term is problematic for several reasons:
It anthropomorphises the machine, implying that it “sees visions” or “believes” things.
It minimises the seriousness of the errors—it sounds whimsical rather than dangerous.
It obscures the actual cause: statistical prediction, not psychological disturbance.
A more accurate term would be:
fabrication,
invention,
auto-generated error, or
model guesswork.
The UK public deserves terminology that clarifies, not romanticises, the issue.
Britain stands at a crossroads. We cannot suppress AI—nor should we. Generative tools bring enormous productivity benefits and expand access to information. But we must confront the reality of AI fabrication with seriousness and foresight.
Below is a pragmatic, actionable roadmap.
Every citizen—not just students—needs fundamental training in how AI works and where it fails. Digital literacy should include:
understanding that AI can fabricate information
knowing how to verify facts independently
recognising the limits of machine intelligence
learning responsible prompts and verification habits
This must become a national priority, not an optional skill.
When AI is used in drafting documents in government, the NHS, courts, or schools, there must be:
transparent disclosure,
human oversight, and
documented verification mechanisms.
AI should assist professionals, not replace their judgment.
News organisations should implement rules requiring journalists to verify every AI-generated fact, quote, date, and reference. The temptation to rely on AI for background research must be met with rigorous editorial policies.
UK schools and universities need policies that:
distinguish appropriate vs. inappropriate use
require explicit fact-checking when AI is used
teach students how to identify fabricated citations
reinforce the value of primary sources
The goal is not to ban AI, but to build a culture of critical thinking.
Britain has the academic talent to lead global research on:
AI truthfulness
machine verification
automated fact-checking
safe deployment in high-risk fields
Funding is needed to ensure the UK remains a leader, not a follower.
While national policy is essential, personal habits matter immensely. Below are practical steps every AI user should adopt.
Treat ChatGPT as a drafting assistant, not a source of facts. It is excellent at drafting, summarising, and brainstorming, but unreliable for facts unless they are independently verified.
Check citations yourself. Do not assume that a citation is real because it looks real.
Verify important claims against authoritative sources. Use:
NHS.uk for medical queries
GOV.UK for legal and tax information
academic databases for scholarly research
Never rely on AI alone for high-stakes decisions. This includes:
health decisions
legal decisions
financial planning
technical engineering guidance
Learn to spot the warning signs of fabrication. Red flags include:
overly specific answers to obscure questions
references you cannot find elsewhere
a confident tone on niche topics where reliable sources are scarce
explanations that sound plausible but lack detail
Despite its flaws, ChatGPT remains extraordinarily useful—when used appropriately.
Its strengths include:
summarising long documents
brainstorming ideas
improving clarity of writing
suggesting alternatives and variations
drafting emails and reports
offering historical or conceptual overviews
generating creative text
Understanding its limits allows us to harness its power safely.
Researchers are exploring several promising directions.
Retrieval-augmented generation: connecting the model to verified databases at answer time could reduce fabrication.
Higher-quality training data: curated datasets may decrease exposure to misinformation.
Self-checking systems: secondary models act as “AI auditors”, evaluating the main model’s output before it reaches the user.
Calibrated uncertainty: future models may indicate how confident they are, much like human experts.
Better training incentives: improved reinforcement learning can encourage honesty over fluency.
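The retrieval-grounding idea can be shown in miniature: the system may only answer from a store of verified text, and must refuse rather than guess when nothing relevant is found. The documents, matching rule, and threshold below are invented for illustration and bear no resemblance to production systems.

```python
from typing import Optional

# A tiny store of verified statements (contents invented for illustration).
DOCUMENTS = {
    "walpole": "Robert Walpole is conventionally regarded as the first "
               "Prime Minister, serving from 1721 to 1742.",
    "nhs": "The NHS advises checking medicine interactions with a "
           "pharmacist or a GP before combining treatments.",
}

MIN_OVERLAP = 2  # crude relevance threshold for the toy word-matcher

def retrieve(question: str) -> Optional[str]:
    """Return the stored text sharing the most words with the question,
    or None when nothing overlaps enough to count as relevant."""
    q_words = set(question.lower().split())
    best_text, best_overlap = None, 0
    for text in DOCUMENTS.values():
        overlap = len(q_words & set(text.lower().split()))
        if overlap > best_overlap:
            best_text, best_overlap = text, overlap
    return best_text if best_overlap >= MIN_OVERLAP else None

def answer(question: str) -> str:
    """Answer only from a retrieved source; refuse rather than guess."""
    source = retrieve(question)
    if source is None:
        return "I don't know: no supporting source was found."
    return f"According to a retrieved source: {source}"

print(answer("Who was the first Prime Minister?"))
print(answer("Tell me about dragons"))
```

The crucial design choice is the refusal branch: a grounded system trades coverage for reliability, answering fewer questions but fabricating none.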
These innovations will help, but no future model will be perfect. AI truthfulness is a technical problem, a social problem, and a philosophical problem combined.
Britain is entering an era in which AI systems—flawed, powerful, and ubiquitous—will shape education, industry, politics, and daily life. The challenge is not simply to prevent errors. It is to build a society capable of navigating a world in which truth can be generated, distorted, or fabricated by machines.
ChatGPT’s errors are not evidence of malevolence. They are the predictable consequences of a tool designed for language, not truth. The responsibility lies with us—educators, journalists, policymakers, and citizens—to use the tool wisely.
If we approach AI with curiosity, caution, and critical thinking, it can become one of the most transformative technologies of our time. But if we treat it as infallible, we will stumble into a future where misinformation is automated, scaled, and invisible.
Britain must choose the smarter path.
And that path begins with understanding why, sometimes, ChatGPT makes things up.