The judiciary, like the legal system in general, is considered one of the largest “text processing industries.” Language, documents, and texts are the raw material of legal and judicial work. That data plays a crucial role in the judicial system, helping investigators, lawyers and judges fit together the circumstances surrounding a particular case in an effort to see that justice is served.
As such, the judiciary has long been a field ripe for the use of technologies like automation to support the processing of documents. Efforts to further expand the use of emerging technologies to address this ongoing need put responsible artificial intelligence (AI) at the center of possible solutions.
The legal system has undergone a significant transformation thanks to the adoption of technology. As law firms and court systems have digitized, a trove of data in the form of court opinions, statutes, regulations, books, practice guides, law reviews, legal white papers and news reports is available to train both traditional and generative AI foundation models for judicial agencies. These models could then be used by court staff to help organize, search and summarize decades’ worth of legal text.
As the use cases of AI and other technologies continue to permeate the judiciary, judges, lawyers and staff must remain at the center of all decisions. Courts, and the legal system as a whole, must also stay alert to bias in data and algorithms that could perpetuate the very inequalities those courts seek to root out. Deploying systems grounded in principles of trustworthy AI, such as transparency and explainability, helps ensure all stakeholders understand how a system was trained, how it works and the scope of its use.
Germany’s judicial system leads the way in AI
As the use cases of AI and generative AI in the judiciary continue to expand, countries like Germany offer examples of how this might work: several German jurisdictions are experimenting with the technology as a means of supporting legal professionals and improving their service.
The demand for an automated solution arrives as Germany’s government has mandated that electronic file management be implemented by courts in all civil, administrative, social and criminal proceedings by 2026, as part of digitalization goals established by the European Union (EU). Now that all pleadings are in electronic form, the data can be used in new ways. AI, with its ability to understand natural language, allows the judiciary to completely rethink its core processes of analyzing, creating and archiving texts.
AI helps German courts manage backlog
One of the most effective uses of AI in the judiciary today is simply helping courts cope with the large number of cases they handle. In recent years, German courts have received an unprecedented flood of proceedings that have overwhelmed the judiciary and resulted in delayed proceedings, hearings and outcomes. At the Stuttgart Higher Regional Court, judges working on these cases were soon faced with a backlog of more than 10,000 cases. Unfortunately, the courts initially had no technology to cope with the volume. Most of the work was done manually and was highly repetitive: judges had to spend hours reading long electronic pleadings, documents that could run to hundreds of pages and usually differed in only a few case-specific features.
The Ministry of Justice in Baden-Württemberg recommended using AI with natural language understanding (NLU) and other capabilities to help categorize each case into the case groups the court was handling. The courts needed a transparent, traceable system that protected data. IBM® created an AI assistant named OLGA that categorizes cases, extracts metadata and helps bring cases to faster resolution. With OLGA, judges and clerks can sift through thousands of documents faster and use specific search criteria to find relevant information across documents. The system also provides information on the lawsuit to contextualize what surfaces from a search, preserves the case history and gives users a comprehensive view of all the information for the case and where it originated. Judges are relieved of highly repetitive tasks and can concentrate on complex issues, and the courts report that case processing time could potentially be reduced by more than 50%.
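To make the idea of case categorization and metadata extraction concrete, here is a minimal sketch of how such a pipeline might look. The case groups, keywords and field patterns below are illustrative assumptions for this sketch, not details of OLGA itself.

```python
import re
from dataclasses import dataclass, field

# Illustrative case groups and trigger keywords -- assumptions for this sketch,
# not the actual categories or rules used by OLGA.
CASE_GROUPS = {
    "diesel_emissions": ["exhaust", "emission", "defeat device"],
    "air_passenger_rights": ["flight", "delay", "cancellation"],
    "rental_dispute": ["lease", "rent", "landlord"],
}

# Hypothetical patterns for metadata that often appears in pleadings.
METADATA_PATTERNS = {
    "case_number": re.compile(r"\b\d{1,3}\s?[A-Z]{1,3}\s?\d{1,5}/\d{2}\b"),
    "vehicle_id": re.compile(r"\bVIN[:\s]+([A-Z0-9]{17})\b"),
}

@dataclass
class CaseRecord:
    text: str
    category: str = "uncategorized"
    metadata: dict = field(default_factory=dict)

def categorize(record: CaseRecord) -> CaseRecord:
    """Assign the case group whose keywords appear most often in the pleading."""
    lowered = record.text.lower()
    scores = {
        group: sum(lowered.count(kw) for kw in keywords)
        for group, keywords in CASE_GROUPS.items()
    }
    best_group, best_score = max(scores.items(), key=lambda item: item[1])
    if best_score > 0:
        record.category = best_group
    return record

def extract_metadata(record: CaseRecord) -> CaseRecord:
    """Pull structured fields out of the free text so clerks can search on them."""
    for name, pattern in METADATA_PATTERNS.items():
        match = pattern.search(record.text)
        if match:
            record.metadata[name] = match.group(match.lastindex or 0)
    return record

if __name__ == "__main__":
    pleading = "Case 12 O 345/21: the plaintiff claims the exhaust emission software ..."
    record = extract_metadata(categorize(CaseRecord(text=pleading)))
    print(record.category, record.metadata)
```

A production system would rely on NLU models rather than keyword counts, but the structure is the same: classify each pleading into a case group, extract searchable fields and keep a record of where each piece of information came from.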
Frauke helps German courts with air passenger suits
Elsewhere in Germany, IBM worked with the Frankfurt District Court to successfully test an AI system known as “Frauke” (Frankfurt Judgment Configurator Electronic) for air passenger rights lawsuits. Between 10,000 and 15,000 cases related to passenger rights (e.g., flight delays) end up at the Frankfurt District Court every year. The court asked for help with the process of drafting judgments, a laborious and repetitive task for judges, who had to collect the relevant data and then repeatedly write almost identical judgments.
In a proof of concept last year, Frauke extracted the case-specific data (including flight number and delay time) from the pleadings and, in accordance with the judge’s verdict, helped expedite the drafting of judgment letters by using pre-written text modules. So far, using this technology, Frauke has significantly reduced the processing time for preparing judgments.
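As a rough illustration, drafting from pre-written text modules can be thought of as extracting the case-specific fields and filling a template. The field patterns and wording below are assumptions for this sketch, not Frauke’s actual modules.

```python
import re
from string import Template

# Hypothetical extraction patterns for the case-specific fields mentioned above
# (flight number, delay time); real pleadings would need far more robust parsing.
FLIGHT_NUMBER = re.compile(r"\b([A-Z]{2}\s?\d{2,4})\b")
DELAY_HOURS = re.compile(r"delay(?:ed)?\s+(?:of|by)\s+(\d+)\s+hours?", re.IGNORECASE)

# An illustrative pre-written text module; the wording is invented for this sketch.
JUDGMENT_MODULE = Template(
    "The defendant is ordered to pay compensation to the plaintiff for "
    "flight $flight_number, which arrived $delay_hours hours late."
)

def draft_judgment(pleading_text: str) -> str:
    """Extract case-specific data from the pleading and fill the text module."""
    flight = FLIGHT_NUMBER.search(pleading_text)
    delay = DELAY_HOURS.search(pleading_text)
    if not (flight and delay):
        raise ValueError("Could not extract flight number or delay from pleading")
    return JUDGMENT_MODULE.substitute(
        flight_number=flight.group(1),
        delay_hours=delay.group(1),
    )

if __name__ == "__main__":
    pleading = "The plaintiff booked flight LH 1234, which suffered a delay of 5 hours."
    print(draft_judgment(pleading))
```

The key design point is that the system only prefills a draft from data it can point to in the pleading; the judge still reviews the extracted facts and makes the decision.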
Legal professionals must remain in the loop
These case studies provide real-world examples of the many benefits of incorporating AI into judicial proceedings to aid in the processing of documents and the automation of manual tasks. At the same time, there is broad consensus on limits to AI’s use in the judiciary.
Some lawyers have already begun taking advantage of the technology to automate the creation of legal briefs, prompting judges to demand disclosure when generative AI has been used. As legal systems grapple with the new technology’s pros and cons, lawyers relying on generative AI might also have to certify that it has not resulted in the disclosure of confidential or proprietary client information.
The use of any type of AI by public entities, including the judiciary, should be anchored in the fundamental properties of trustworthy AI used by IBM. Explainability will play a key role. For example, to demonstrate that generative AI is used responsibly, it is necessary to offer the ability to trace the steps the system takes in categorizing, summarizing and comparing documents.
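One way to picture that traceability requirement is an audit trail that records each processing step alongside the model or rule version that produced it. The record structure below is a hypothetical sketch, not an IBM or court specification.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# A minimal, hypothetical audit-trail record: each step the system performs
# (categorization, summarization, comparison) is logged with its input reference,
# output summary and the model or rule version that produced it.
@dataclass
class TraceStep:
    step: str              # e.g. "categorize", "summarize", "compare"
    input_ref: str         # document identifier, not the full confidential text
    output_summary: str    # what the system produced at this step
    model_version: str     # which model or rule set was used
    timestamp: str

class DecisionTrace:
    """Collects steps so reviewers can reconstruct how a result was reached."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.steps: list[TraceStep] = []

    def record(self, step: str, input_ref: str, output_summary: str, model_version: str):
        self.steps.append(TraceStep(
            step=step,
            input_ref=input_ref,
            output_summary=output_summary,
            model_version=model_version,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

    def export(self) -> str:
        """Serialize the trace for auditors, judges and other stakeholders."""
        return json.dumps(
            {"case_id": self.case_id, "steps": [asdict(s) for s in self.steps]},
            indent=2,
        )

if __name__ == "__main__":
    trace = DecisionTrace(case_id="12-O-345-21")
    trace.record("categorize", "pleading-001", "air_passenger_rights", "rules-v0.3")
    trace.record("summarize", "pleading-001", "3-paragraph summary", "model-v1.2")
    print(trace.export())
```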