KNOW YOUR RIGHTS: AI AND LEGAL PROCEEDINGS
AI is quickly becoming part of all our lives, whether you are reading news articles, scrolling through social media or using Copilot to organise your diary, and the legal profession is no different. Many people are turning to ChatGPT to help them through complex legal proceedings as an alternative to hiring lawyers.
We are now frequently seeing defendants representing themselves in court purely with the aid of ChatGPT. In most cases, however, those defendants are not merely using AI to assist them with legal submissions, but are actively getting AI to write their legal submissions for them.
Whilst AI can be of assistance in providing guidance and information, allowing it to write submissions is causing problems. It frequently gets the law wrong, cites irrelevant legislation and, on some occasions, has completely fabricated cases.
In a recent judgment handed down by the Court of Appeal, a mother accepted in open court that she had used artificial intelligence to assist her in preparing a “lengthy” skeleton argument citing authorities, some of which were not relevant at all, whilst others were entirely made up. This landed her in hot water with the judge, but fortunately in this case the court was sympathetic to her situation, with the judge stating:
“I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments.”
But others have not been so lucky. In another case from earlier this year, DPP v Micura [2026], the High Court criticised a defendant for making his submissions through ChatGPT:
“[He] was a litigant in person. He produced at the hearing short written submissions which he freely accepted had been in part written by ChatGPT. Those submissions referred to three cases. It transpired, after some investigation by the court, that one of those cases was cited as being in the Court of Appeal but was actually a decision of the High Court. The second had the wrong reference but did appear to be a genuine case. The third did not appear to exist. This is an example of the extreme caution needed to be exercised when receiving submissions from a litigant in person which purport to refer to previous authorities.
I note that some of the respondent’s submissions were simply wrong, presumably because ChatGPT had not merely made up the cases but misstated what the genuine cases said.”
He lost his case.
And these are just two examples of many similar judgments being handed down every day. The Guardian reported earlier this year that in a case where claimants made an £89m damages claim against Qatar National Bank, 18 of their 45 cited cases were fictitious. At the same time, employment tribunals are being clogged up with cases where AI has fabricated authorities to validate an employee’s grievance. In one case, a judge ruled that nine authorities put before the First-tier Tribunal were false.
But it is not just losing cases that people need to be wary of. Anyone involved in legal proceedings needs to remember that legal documents and evidence often contain sensitive and confidential information which cannot be shared publicly. In UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026], the court held that:
“Uploading confidential documents into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality.”
This case is of particular concern. Anyone who has used an open AI tool such as ChatGPT to prepare documents containing confidential information may be in trouble. At best, this could be considered a breach of confidentiality. At worst, in certain circumstances it might amount to a criminal offence which could see people end up in prison.
So why does AI get it so wrong?
There is no easy answer to that. AI will often obtain information from various websites, and the problem with that - especially when it comes to a complex area of law such as road traffic offences - is that much of the material available online is incorrect. Many people misunderstand cases, read judgments incorrectly, or simply misinterpret them. When a case is in the news,