From court backlogs to regulation: the challenges of AI in justice systems

More than 79,600 criminal cases were stuck in the court backlog in England and Wales at the end of 2025. The Crown Court backlog, in fact, has remained at record levels since early 2023 and, according to the Ministry of Justice, could reach 100,000 cases by 2028.

Faced with overwhelmed courts and limited resources, governments are increasingly turning to artificial intelligence as a possible solution. AI is no longer a future concept for justice systems; it is already being used in courts around the world.

From document management and legal research to e-filing systems, chatbots, and remote hearings, judges, lawyers, and court staff are leveraging AI tools in their daily work. In jurisdictions struggling with congestion and delays, AI is often presented as a way to speed up proceedings and reduce administrative burdens.

But the growing presence of AI in justice systems raises urgent questions. How should it be regulated? How far can automation go without undermining due process? And who is accountable when AI tools influence judicial outcomes?

In response to these concerns, UNESCO launched its guidelines for the use of AI systems in courts and tribunals in December 2025 at the Athens Roundtable on AI and the Rule of Law in London.

The guidelines set out 15 principles aimed at ensuring that AI systems are developed and used ethically and in full respect of human rights, covering the entire lifecycle of AI tools used in courts.

Yet, preparedness remains limited. UNESCO’s 2024 global survey on the use of AI in judicial systems found that only 9% of surveyed judicial operators had received any training or information related to AI, despite 44% having already used AI tools in their work. At the same time, 73% of respondents said mandatory regulations and guidelines for AI in the judiciary are necessary.

According to Anca Radu, scientific expert at the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ), AI tools can help create better-informed parties, improve access to judicial information, and contribute to faster procedures.

Still, caution is widely emphasized. Margaret Satterthwaite, the UN Special Rapporteur on the independence of judges and lawyers, has warned that AI should not be adopted “without careful assessment of its potential harms, how to mitigate those harms and whether other solutions would be less risky.”

Satterthwaite has stressed that the right to an independent and impartial tribunal requires access to a human judge, just as the right to counsel requires a human lawyer.

The risks associated with AI in judicial systems are significant: biased AI tools can lead to discriminatory outcomes, while automated or AI-assisted decision making often relies on opaque systems whose reasoning cannot be explained, undermining the right to a fair trial.

This so-called “black box” problem is central to due process concerns. As defined by IBM Think Staff Editor Matthew Kosinski, a black box AI system is one in which inputs and outputs are visible, but the internal decision-making process remains inaccessible. Without explainability, it becomes difficult to justify decisions or to scrutinize them in any meaningful way.
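To make the concept concrete, here is a minimal, purely illustrative Python sketch (not drawn from any system cited in this article): a toy risk scorer whose inputs and output are visible while its internal weighting is hidden, which is exactly the property that makes such tools hard to justify or contest in court.

```python
# Illustrative toy only: a "black box" risk scorer, not a real judicial tool.
# The inputs and the final score are visible; the internal weighting is never
# exposed or explained to the person the decision affects.

class OpaqueRiskModel:
    def __init__(self):
        # Hypothetical internal weights, never disclosed to the parties.
        self._weights = {"prior_convictions": 0.7, "age": -0.02, "postcode_score": 0.4}

    def score(self, case: dict) -> float:
        # Only the final number leaves the box, with no accompanying reasoning.
        return sum(w * case.get(k, 0) for k, w in self._weights.items())

model = OpaqueRiskModel()
print(model.score({"prior_convictions": 2, "age": 30, "postcode_score": 1.0}))
# Prints roughly 1.2 -- a court sees the score but cannot inspect why it was
# produced, which is the explainability gap at the heart of the due process concern.
```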

Judicial independence may also be affected by political influence or by private actors involved in designing, training, and deploying AI systems. Judges could face pressure to follow AI-generated recommendations, even when they conflict with their own legal reasoning.

Against this backdrop, Article 5 of the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law requires states to ensure that AI systems are not used in ways that undermine democratic institutions, including judicial independence and access to justice.

Recent cases highlight why these safeguards matter.

In June 2025, a senior UK judge warned that lawyers could face criminal sanctions if they relied on fictitious, AI-generated case law in court submissions. Dame Victoria Sharp, President of the King’s Bench Division, cautioned that widely available generative AI tools like ChatGPT “are not capable of conducting reliable legal research,” and that misuse could have “serious implications for the administration of justice and public confidence.”

Her comments followed the joint judgment in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank, where lawyers submitted fictitious cases and quotations generated through AI tools. While the court stopped short of contempt proceedings, the lawyers were referred to their professional regulators.

Another landmark case emerged in 2020, even before the launch and widespread adoption of OpenAI’s ChatGPT, when the District Court of The Hague ruled against the Dutch government’s use of the System Risk Indication (SyRI), an algorithm designed to detect welfare fraud.

As noted at the time, this was “one of the first judgments in the world addressing the human rights implications of the use of artificial intelligence in the public sector and states’ respective obligations to ensure transparency of AI processes.” The court found that the algorithm used to detect welfare fraud was biased against foreigners.

An Amnesty International report later confirmed that the system reinforced existing institutional biases linking crime to race and ethnicity and operated without meaningful human oversight.

The message from these developments is clear: while AI may help improve efficiency and reduce court backlogs, its use in judicial systems must remain firmly anchored in human judgment and human rights standards. 

As courts continue to experiment with new technologies, careful regulation and restraint will be essential to ensure that justice is not only faster, but also fair. Meanwhile, legislation around the world is only beginning to catch up, with the EU’s AI Act controversially emerging as a de facto global benchmark.
