Judges Blame Their Staff — and AI
When Grassley demanded an explanation, both judges blamed their staff.
Neals claimed a law school intern used ChatGPT “without authorization,” insisting his chambers had a policy against AI-generated work.
Wingate pointed the finger at a law clerk who used Perplexity AI as a “foundational drafting assistant,” calling it a “lapse in human oversight.”
But Grassley and the public saw right through it. A judge’s signature is the final approval. It doesn’t matter who typed the words — it’s the judge’s duty to verify them.
These weren’t small clerical slip-ups; they were glaring fabrications that any competent judge should have caught.
Grassley Calls Out the Double Standard
For months, judges have been hammering attorneys who used AI irresponsibly.
A federal judge in Alabama recently sanctioned three lawyers and referred them to the bar for using ChatGPT-generated citations.
A California appeals court fined a lawyer and warned that “no brief, pleading, motion, or any other paper filed in any court should contain any citations” that the attorney has not personally verified.
Morgan & Morgan, one of America’s largest firms, was fined $5,000 after filing a motion with eight nonexistent cases.
A Texas lawyer was ordered to complete continuing-education courses and pay a $2,000 fine for the same offense.
But when judges do it? No fines. No sanctions. Just apologies and vague promises of “enhanced review processes.”
Grassley’s reaction was pointedly diplomatic:
“Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again.”
Between the lines, Grassley was sending a clear message — judges shouldn’t get special treatment for the same offenses they punish others for.
The Judiciary Still Has No AI Rules
The Administrative Office of the U.S. Courts told Grassley that it set up an AI Task Force that issued “interim guidance” in July 2025.
The so-called guidance doesn’t require anything. Judges are simply told to “consider” disclosing AI use and to be “wary” of delegating judicial work to machines. There are no mandatory rules, no penalties, and no enforcement.
So while lawyers get punished for AI hallucinations, judges can quietly rely on ChatGPT-drafted orders that reshape real cases — with zero consequences.
Grassley Sounds the Alarm
Grassley put it plainly:
“We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy.”
This wasn’t just about technology. It was about accountability.
Judges who sign orders they haven’t read are guilty of something worse than using AI — they’re guilty of neglecting their duty.
The senator’s warning is unmistakable: the AI excuse won’t save anyone next time.
If the courts expect lawyers to uphold truth and accuracy, the same must apply to the bench.
Grassley just reminded every judge in America — the Senate is watching.