
How AI is driving courts crazy

Since January 2026, more civil proceedings have ended up in the district courts without a lawyer involved. Increasingly, the written submissions come from AI. The result: longer, eloquent texts of dubious substance that judges and lawyers have to scrutinize. A commentary.

Since January 1, 2026, civil proceedings with an amount in dispute of up to 10,000 euros have been heard in the district courts. At first glance, this looks like a trivial, purely technical change. But more and more people are using artificial intelligence (AI) to draft their briefs.

Together with the higher amount-in-dispute limit, this could change civil procedure more profoundly than it seems at first glance. The question is how the overall quality of complaints, defenses and pleadings in court proceedings will change.

AI at the district court: reform meets technology

By raising the amount-in-dispute limit to 10,000 euros, the legislature is pursuing a goal that is, in my view, legitimate: more cases should be heard in the district courts, and decisions should be reached there faster and closer to the citizens.

The route to court, whether over disputed invoices, defective services or other smaller disputes, should have a lower threshold. This includes, in particular, a rule that applied in the district courts before the reform and continues to apply: there is no obligation to be represented by a lawyer. Things are different in the regional courts, where the so-called obligation to have a lawyer applies.

At the same time, however, something has changed that this reform probably did not, and for lack of foresight could not, take into account: the existence, availability and quality of AI systems and their use by citizens.

Anyone who has a legal problem today can describe it to an AI like ChatGPT or Gemini in everyday language and receive a structured, legal-sounding text within seconds.

This changes people's behavior, in court proceedings as elsewhere. Not because people are suddenly more legally savvy, but because the hurdle to putting something "legally acceptable" on paper at all has fallen dramatically.

Briefs are getting longer, not better

It should come as no surprise that legal laypeople use AI to draft letters to the court. What happens when AI enters bureaucratic or conflict-prone contexts is now easy to observe.

A look at other areas is enough. In the real estate industry, for example, many property managers tell me during my lectures that letters from owners and tenants have become significantly longer.

That does not make them more precise or necessarily more factual, but it does make them more comprehensive, because AI produces text, and it produces it willingly. The property manager then has to work through this mass of text.

AI in the district court creates additional work for judges

Transfer this pattern to civil proceedings before the district courts and the development is foreseeable: lawsuits and defenses will become more extensive when legal laypeople write them with AI instead of hiring lawyers.

They will contain more unnecessary arguments, more legal terms and superfluous explanations, and more supposed references to case law and literature.

To legal laypeople, this looks like an upgrade of their own position. Above all, it means more work for judges, because a court must not let itself be impressed by length and linguistic force.

It must examine what is legally relevant. It has to recognize which arguments are valid and which are mere masses of text. And it has to check every claimed source, regardless of whether it comes from a textbook or from the statistical probabilities of a language model.

Risk: When conviction replaces law

This shift from quality to quantity carries a risk. AI is excellent at formulating things convincingly, but it is not reliably correct. It readily invents content, a phenomenon known as "hallucination."

This becomes particularly relevant where AI cites specific judgments, references or opinions. Someone without legal training, which applies to the majority of citizens, faces the problem of spotting the errors in the mass of text. I expect this to be very difficult.

For judges, however, this creates a far more serious, structural problem. Every cited decision and every asserted doctrine must be checked, and this checking is currently rarely automated; it is manual work.

What may once have been a slim lawsuit with a clear legal question now often becomes a multi-page document with numerous secondary aspects. The actual legal question disappears in the text.

This means more effort for the court to uncover the core of the case. The danger is obvious: proceedings could take longer, not shorter. The volume of text thus counteracts the efficiency the reform hoped for.

New tasks for judges

If things turn out as I fear, the judiciary will inevitably have to adapt. Judges will have to engage more intensively with AI, not out of technical curiosity, but out of practical necessity.

Anyone who wants to understand how AI-generated submissions come about needs to know how AI works, what its typical weaknesses are and what patterns it produces. But it is not just about understanding how AI works.

It will no longer be enough for judges simply to read the content of written submissions. Increasingly, they will have to filter text, recognize what is relevant and identify the padding typical of AI.

In the long term, the judiciary will hardly be able to avoid technical aids that structure written submissions, verify cited judgments and literature, and make volumes of text manageable. Equipping the judiciary with AI is no longer optional; it is a necessity.

AI at the district court: positive and negative side effects

Of course, this development also has positive sides. More and more people will dare to assert their rights in court. Access to justice is becoming more real, not just theoretical. AI can help to organize thoughts and formulate concerns in an understandable way. That’s a win.

But this gain comes at a price. If the quality of the documents decreases, the efficiency of the entire system suffers. If courts have to spend more time sifting through masses of text and examining sources, this time is missing elsewhere. The civil process thrives on clarity and concentration on the essentials. There is a fear that AI will promote the opposite.

A system under pressure to adapt

Finally, it is also foreseeable that this development will not regulate itself. The new amount-in-dispute limit will remain, but in my view it is not the decisive factor. More important is that AI keeps getting better.

The question is not whether the judiciary has to adapt, but rather how quickly and how consistently. Technical support, new ways of working and a deeper understanding of AI-generated texts are becoming a prerequisite for functioning legal proceedings.

At the same time, the role of the legal profession will change. Anyone who previously hired a lawyer for a lawsuit, even before a district court, will increasingly ask whether, where a lawyer is not mandatory, they can take the first step themselves.

That consideration is understandable, but I think it is deceptive. AI does not replace experience, strategic judgment and a sense of risk, which is precisely what distinguishes lawyers. And precisely because briefs are easier to produce, the need to put legal problems in context is growing.

Conclusion: Dealing with AI is changing the civil process

I expect a systemic change in district court proceedings due to citizens' increasing use of AI. Raising the amount-in-dispute limit will amplify it.

However, if AI plays an increasingly larger role in civil proceedings, in my opinion the work of lawyers will – and must – shift: away from pure formulation to explaining, evaluating and filtering.

Lawyers will increasingly become translators between technology, law and reality. They will have to explain why a good-sounding brief is not necessarily a good brief. And they are needed where AI reaches its limits.

The reform of January 1, 2026 marks less a break than a transition. The law remains the same, but the way it is argued over will change. The real challenge lies not in the use of AI but in how it is handled.
