Artificial intelligence has reshaped nearly every profession it has touched, and the legal realm is no exception. Once hailed as a miracle tool for improving efficiency, generative AI is now implicated in an alarming pattern of misuse and misjudgment across the legal system. The stakes are no longer theoretical. Legal practitioners are facing sanctions. Judges are issuing error-filled orders. And the profession's reputation for rigor and responsibility is under active threat.
Judges Under AI’s Shadow
In July 2025, a federal judge in Mississippi made headlines for issuing a temporary restraining order that raised eyebrows far beyond the usual legal commentary circles. The order, meant to halt enforcement of a law limiting diversity programs, was riddled with factual errors. It listed plaintiffs who were not part of the case. It included incorrect quotes from Mississippi law. And most alarmingly, it cited cases that did not appear to exist.

U.S. District Judge Henry T. Wingate later withdrew the order, but the damage had already been done. In the following days, legal scholars and journalists, including Eugene Volokh and outlets like Mississippi Today, publicly speculated about whether AI had been used to generate part of the flawed ruling. Though Wingate has not confirmed the use of AI, the pattern bears an uncanny resemblance to errors produced by unverified use of generative tools like ChatGPT. Regardless of the tool involved, the case illustrates how judicial integrity can be jeopardized when AI enters the courtroom unsupervised.
The Phantom Case Epidemic
Perhaps the most infamous case of AI misuse remains Mata v. Avianca, Inc. In 2023, a New York attorney submitted a legal brief citing more than half a dozen fake cases, fabricated entirely by an AI chatbot. The result was swift and severe: public embarrassment, sanctions, and a judicial order that has since become a touchstone in discussions about AI hallucination in the legal field.
This wasn't an isolated incident. In early 2025, three attorneys from the prominent firm Morgan & Morgan were sanctioned in federal court for citing eight nonexistent cases in motions in limine. Similarly, attorneys from Butler Snow LLP were disqualified from a case after submitting AI-generated briefs containing phantom citations.

The penalties vary: fines, disqualification, mandated re-education. In one bankruptcy case, attorney Thomas Nield was fined $5,500 and ordered to complete AI training after submitting a motion full of fake citations. Another Massachusetts attorney was fined $2,000 for similar misconduct.

The trend is unmistakable: AI hallucinations, if unchecked, are no longer just a novelty; they're malpractice.
Trust, but Verify
In response to the growing crisis, the American Bar Association issued Formal Opinion 512 in July 2024. It emphasized that while AI tools can support legal work, they must never be used in place of critical human oversight.