
Fake cases and real consequences: the risks of relying on AI in litigation

The recent case of Harber v HMRC in the First-tier Tax Tribunal (the FTT) has attracted considerably more commentary over the last couple of months than would be expected of a “basic” tax appeal against “failure to notify” penalties issued by HMRC. It has drawn interest as an example of the legal profession coming unstuck through the use of generative AI technology in legal proceedings. While there have been other examples from around the world, this is one of the first such cases in the UK.

By way of recap, Mrs Harber (a litigant in person) made submissions referencing nine FTT decisions which she claimed supported her defence of “reasonable excuse”. The names of the cases cited did not, on their face, suggest anything suspicious – the names used were unremarkable and the facts of the cases seemingly realistic – such that at first glance they appeared to be plausible precedents. However, neither HMRC nor the Tribunal was able to identify FTT decisions matching the short case names and summaries of facts included in her submissions.

The Appellant was asked whether the cases had been sourced from an AI system. She responded that this was “possible”, as her submissions had been prepared by “a friend in a solicitor’s office”.

The Tribunal was then required to ascertain whether the cases were in fact genuine FTT judgments or AI-generated, and did so by reviewing the FTT website and the British and Irish Legal Information Institute (BAILII).

Whilst tasked with a rather bizarre search, the Tribunal was assisted by Mata v Avianca, a US case in which lawyers had sought to rely on fictitious ChatGPT-generated cases. In reviewing the purported judgments, Judge Castel identified “stylistic and reasoning flaws” that “do not generally appear in decisions issued by United States Courts of Appeals”.

Aside from providing an arch observation about the perils of AI, Harber is worth reflecting on a little more deeply. It is interesting that, although it must have been obvious that something was amiss – there were no citations, the names were similar to those of other important tax cases, and it is unlikely that cases of significance would have been at the fingertips of a “friend” in a solicitor’s office yet unknown to HMRC’s Counsel in the case – it was difficult to prove. Similarly, Judge Castel’s comment offers only a vague yardstick for identifying a genuine decision: real judges, after all, are not immune to stylistic and reasoning flaws.

What Harber demonstrates is that, frequently, proper process relies not so much on rules as on the constructive engagement of all sides to a dispute. There is a reason why the Courts are keen to promote the “overriding objective”, which requires the parties to co-operate with the Tribunal and to enable it to deal with cases “fairly” and “justly”. In the end, the rules work because they are applied with an eye to the ultimate goal.

Given the increasing abilities of AI, it may be lazy to assume that co-operating “fairly” and “justly” will remain the preserve of humans but, for now, it does. Leaving matters to AI has the potential to undermine proceedings. The Tribunal commented that, whilst citing invented judgments had less of an impact in this case (as the law on reasonable excuse is well-settled), in general it “…causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined.”

On the place of AI in the courtroom more generally, the Tribunal agreed with Judge Castel in the American proceedings, who said that the practice of citing invented AI-generated cases “promotes cynicism about the legal profession” and threatens the authoritative value of case precedent.

This issue will inevitably become increasingly relevant as AI becomes more widely accessible, and later generations of AI will all too soon render current systems obsolete. In its 2023 Risk Outlook report on the use of AI in the legal market, the Solicitors Regulation Authority (SRA) spotlights incidents where AI-drafted legal arguments have included non-existent cases and warns that such errors may lead to miscarriages of justice.

These are timely reminders that, where AI is used, it should be used responsibly, and all outputs should be carefully checked for accuracy before being relied upon. AI should be viewed as a tool to augment, not replace, lawyers. With that in mind, law firms are increasingly looking at ways of using generative AI to support lawyers with time-consuming tasks: helping to draft a witness statement, summarising a large body of text to assist with a review, providing a starting point for presentations, and helping to compose emails and memoranda. The courts themselves are also investigating these opportunities, and a cross-jurisdictional judicial group recently issued a guidance note for judicial office holders on the use of AI.

In each case, human intervention is crucial, and the guidance note stresses “accountability” and “responsibility”. This is not simply because AI is not yet “good enough” and makes errors, but because the very nature of the legal process, perhaps ironically, relies as much on the discretion and judgment of humans in pursuing the “overriding objective” as it does on the rules.

An appellant seeking to rely on AI-generated material could risk having their appeal struck out if the Tribunal considers that doing so is contrary to the overriding objective. In this instance, the Tribunal looked past what appeared to be Mrs Harber’s honest mistake and proceeded to apply the correct legal principles on reasonable excuse set out in the leading authority of Christine Perrin v HMRC.

However, once the novelty of a case like Harber has worn off, Courts may find that they need to protect the integrity of the system more forcefully. If so, future appellants attempting to rely on AI-generated material – without an eye to the “fair” and “just” progress of proceedings – may not get off so easily.

What is certain in all of this is that AI will become a feature of litigation going forward, but parties should consider it suspicious if they are presented with multiple cases on point that wholly support their arguments. As all human lawyers know, this is far too good to be true!

Tags

lawtech, litigation, tax, tax investigations and disputes, blog, ai, generative ai