The number of UK court cases involving AI-generated fictitious legal citations has reached 60, with judges increasingly naming and shaming legal professionals who present fabricated cases to courts — and warning that the most serious offenders could face criminal charges for contempt of court or perverting the course of justice.
The latest case to attract judicial censure involved Layla Parsons, an unregistered barrister and therapist, whom Recorder Howard at Bournemouth Family Court ordered to be publicly named after she presented a skeleton argument containing four non-existent cases or propositions in Children Act proceedings. Parsons admitted using a widely known AI tool and apologised for inadvertently misleading the court, but the recorder stressed that the public interest favoured naming her, particularly because she offered paid legal services.
The case is one of a growing number tracked by legal researcher Matthew Lee, whose database now records 60 UK instances in which courts have explicitly found, or implied, reliance on AI-hallucinated content, up from 38 just weeks ago. Beyond the UK, the database records 854 cases in the United States and hundreds more across other jurisdictions.
Dame Victoria Sharp, President of the King's Bench Division, formally warned lawyers last year that those who submit fictitious AI-generated cases could face criminal charges, and courts have since issued wasted costs orders and referred multiple barristers and solicitors to the Bar Standards Board and the Solicitors Regulation Authority.
The phenomenon of AI "hallucinations" — where large language models generate plausible but entirely fabricated legal citations, case names, and propositions — has become one of the most pressing challenges facing the legal profession. Widely used tools including ChatGPT have been implicated in multiple cases.
"AI tools are a poor way to conduct research for new, unverified information," guidance from the Courts and Tribunals Judiciary states. "Legal representatives bear the ultimate responsibility for the accuracy of material presented to court."
The Bar Council has published guidance warning against AI-generated content that misleads the court, and the SRA has flagged the issue in its risk outlook reports. Proposed solutions include ring-fenced AI legal research tools, mandatory disclosure of AI use in pleadings, and requirements for lawyers to maintain basic legal research skills independently of AI assistance.