
UK Barristers Named and Shamed as AI Hallucination Cases in Courts Reach 60 — and Rising

Titanic News | Saturday, 4 April 2026

The number of UK court cases involving AI-generated fictitious legal citations has reached 60, with judges increasingly naming and shaming legal professionals who present fabricated cases to courts — and warning that the most serious offenders could face criminal charges for contempt of court or perverting the course of justice.

The latest case to attract judicial censure involved Layla Parsons, an unregistered barrister and therapist, whom Recorder Howard at Bournemouth Family Court ordered to be named after she presented a skeleton argument containing four non-existent cases or propositions in Children Act proceedings. Parsons admitted using a widely known AI tool and apologised for inadvertently misleading the court, but the recorder emphasised the public interest in naming her, particularly as she offered paid legal services.

The case is one of a growing number tracked by legal researcher Matthew Lee, whose database now records 60 UK instances where courts have explicitly found or implied reliance on AI-hallucinated content — up from 38 just weeks ago. Globally, the database records 854 cases in the USA and hundreds more across other jurisdictions.

Dame Victoria Sharp, President of the King's Bench Division, issued a formal warning to lawyers last year that those who submit fictitious AI-generated cases could face criminal charges, and courts have since issued wasted costs orders and referred multiple barristers and solicitors to the Bar Standards Board and Solicitors Regulation Authority.

The phenomenon of AI "hallucinations" — where large language models generate plausible but entirely fabricated legal citations, case names, and propositions — has become one of the most pressing challenges facing the legal profession. Widely used tools including ChatGPT have been implicated in multiple cases.

"AI tools are a poor way to conduct research for new, unverified information," guidance from the Courts and Tribunals Judiciary states. "Legal representatives bear the ultimate responsibility for the accuracy of material presented to court."

The Bar Council has published guidance warning against AI-generated content that misleads the court, and the SRA has flagged the issue in its risk outlook reports. Proposed solutions include ringfenced AI research tools, mandatory disclosure of AI use in pleadings, and requirements for lawyers to maintain basic legal research skills independently of AI assistance.

