Judges are cautiously adopting AI to speed up court proceedings, but as hallucinations creep into rulings, experts warn of a looming legitimacy crisis for the U.S. justice system. (Source: Image by RR)

Judges Experiment with AI to Streamline Workloads but Struggle with Errors

The U.S. legal system has become an experimental ground for generative AI, revealing both its potential and its flaws. Early embarrassments emerged when lawyers submitted filings that cited cases that never existed. Expert witnesses have stumbled too, offering sworn testimony marred by hallucinations and errors. Now judges are joining the AI experiment, using it to speed up legal research, draft orders, and summarize cases. But when mistakes appear in judicial rulings, the consequences are far greater, raising serious questions about accountability.

Judges, as technologyreview.com notes, are cautiously testing AI’s limits, often distinguishing between “safe” uses like summarizing documents and riskier applications that require judgment. Texas federal judge Xavier Rodriguez, who has studied AI since 2018, uses it for building timelines and preparing for hearings, comparing its output to the work of a junior clerk. New guidelines from the Sedona Conference reflect this view, encouraging research and organizational uses while warning that no tool has eliminated hallucinations.

Some judges, like California magistrate Allison Goddard, have embraced AI more openly, calling it a “thought partner.” She uses ChatGPT, Claude, and other models to organize filings, generate questions, and streamline lengthy documents. Yet she avoids using AI in sensitive criminal cases and turns to law-specific tools for authoritative tasks. Her selective approach highlights a growing divide: some judges see AI as a necessary tool for managing overwhelming caseloads, while others fear that reliance on it undermines judicial integrity.

Critics, including Louisiana appellate judge Scott Schlegel, warn of a “crisis waiting to happen.” Unlike lawyers, whose AI-induced blunders can be sanctioned or their filings dismissed, judges’ errors become binding law and are far harder to retract. Recent cases in which AI-tainted rulings went uncorrected underscore the danger of eroding public trust in the courts. As AI increasingly permeates the justice system, the stakes rise: its efficiency could modernize courtrooms, but its mistakes threaten the very legitimacy of justice itself.

read more at technologyreview.com