AI in the Court: Deputy Head of Civil Justice finds AI ‘jolly useful’

Earlier this year it was publicised that the Singapore Courts had signed an agreement to test generative AI, starting with the small claims tribunal. More recently, Lord Justice Birss, a Judge of the Court of Appeal and the Deputy Head of Civil Justice, said he had found AI software “jolly useful” when drafting a judgment. Here, Dispute Resolution Partner at Beyond Corporate, Alistair Gregory, looks at the advancement of AI in various parts of the legal sector and asks whether we could really have justice dispensed by a computer.


We’re almost there already.

Even the legal dinosaurs that roam the Court rooms will have noticed that AI has already established a place in the legal sector. AI is helping in many areas, including:

  • technology-assisted review, used to speed up the e-disclosure process;
  • accelerating due diligence exercises;
  • generating bespoke contracts from the answers to a simple questionnaire, with advanced AI software then reviewing those contracts and identifying any errors or omissions;
  • advances in voice recognition, which can now conduct detailed document searches or book appointments by voice command, and allow Judges to generate PDF transcriptions from audio recordings and then search and interrogate them when producing a judgment;
  • practice management systems automatically carrying out functions which were once the preserve of office staff.

As our confidence in AI technology grows, it is not beyond comprehension that AI could extend to some or all of the qualities that a Judge possesses: fairness, independence, rationality, impartiality, reasonableness, and a good knowledge of the law. To close one’s mind to the possibility may put you in the category of those in history who stood by their horse and cart claiming that motor vehicles would never catch on.


Surely AI can’t be biased?

One could be forgiven for thinking that because AI is computer based and produces reliable mathematical results, it would be unaffected by bias. The reality, though, is that AI systems are bound to be biased: the originating source material and data comes from humans, so it is likely to be polluted by inherent biases and potential inequalities. If we accept AI’s flaws, then which values should the AI apply, and who should decide this? Perhaps the answer lies in the Court system permanently monitoring and stress-testing the AI. But with the ever-increasing burden placed upon the Court’s resources, should this kind of work be the responsibility of the Court, or would it be better placed with the software developers? The answer must be found soon, as this is not the only sector where the question arises. For example, in the case of self-driving cars, who takes the blame in the event of an accident – the driver, the car manufacturer, or perhaps the AI / software developer? Time will tell.


Access to Justice

Forgive me if this sounds political, but given that access to justice is often needed most by those who can least afford it, would AI bring a new era of extended access to justice, or could it be used to restrict it? When the ‘little guy’ takes on a big corporation there is usually an obvious inequality of arms in terms of legal representation – but could AI step in to redress the balance? The answers to these questions are not obvious at the moment, and it is not clear who should be most concerned.


Does regulation not cure the flaws in AI?

If control and power lie in the hands of those who develop AI, does that not leave us all exposed to their agenda, whether that agenda is intentional or otherwise, and should this not be regulated? If so, when is regulation needed, what safeguards are required and, importantly, who should be the regulator and how will they be accountable? Given the pace at which AI is being introduced, one would hope these questions will be answered sooner rather than later.


So will AI bring cost savings?

Advanced IT projects, delivered through public–private partnerships, are not new to the Court system; probably the most obvious example is the Money Claim Online system. Of course, a key consideration in any public procurement is value for money. So, if the lowest-price tender is to be selected, does that mean the Court system will drift into using lower-price, and possibly lower-quality, AI tools? With such a seismic change likely to affect public services and, importantly, our democracy, should the justice system be measured against value for money at all?


Human touch

If we surrender justice, even in part, to AI, we lose the human nature of what has underpinned our system from the very start. When the judgment in a case comes from a robot, empathy with the parties cannot be the same as from a Judge. For some, the use of algorithms provides certainty and efficiency; for others, it undermines fundamental rights, transparency, public values and accountability. Of course, if we do surrender fully to AI, then how could one appeal a judgment? Perhaps that appeal would go to another robot?


If you have any questions or require any further advice on this topic, get in touch with our specialist teams today at hello@beyondcorporate.co.uk.

[This blog is intended to give general information only and is not intended to apply to specific circumstances. The contents of this blog should not be regarded as legal advice and should not be relied upon as such. Readers are advised to seek specific legal advice.]

By Alistair Gregory