A federal judge in California has issued $31,000 in sanctions against two law firms after discovering that a supplemental legal brief submitted in a civil case contained numerous fake citations generated by AI. The ruling has reignited concerns about AI misuse in legal briefs, especially when attorneys fail to fact-check the tools they rely on.
Judge Michael Wilner, presiding over the case involving a lawsuit against State Farm, was initially persuaded by what appeared to be legitimate legal precedent. But after looking into the sources, he found they simply didn’t exist. “That’s scary,” he wrote in his decision, emphasizing that he nearly cited those fictitious cases in a judicial order. “No reasonably competent attorney should out-source research and writing to AI,” he added.
The AI-generated content came from an outline created by a plaintiff’s lawyer using Google Gemini and Westlaw’s CoCounsel tool. That outline, which included fake case law, was passed on to the prominent law firm K&L Gates, which then used it to craft the final brief. Shockingly, no one at either firm verified the sources before filing the document in court.
When Judge Wilner flagged two of the citations as fake and asked the firm for clarification, the revised brief it submitted revealed even more fabricated cases and misleading quotations. This led him to issue an Order to Show Cause. The lawyers eventually admitted, under oath, that AI tools had been used and that no human had reviewed the AI-generated legal research for accuracy before filing.
Fake Citations, Real Consequences
This incident isn’t isolated. It’s part of a growing trend that’s raising alarms across the legal world. Just months ago, former Trump lawyer Michael Cohen faced similar backlash after submitting a filing that included imaginary cases generated by Google Gemini, which he had mistaken for a legal search engine. In another instance, lawyers suing a Colombian airline filed briefs containing citations invented by ChatGPT.
Judge Wilner’s reaction was blunt. “The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong,” he stated. More troubling, he added, was the fact that the flawed document was handed off to another legal team without any warning about its origins—essentially putting other attorneys at risk of professional misconduct.
Legal scholars like Eric Goldman and Blake Reid shared the ruling on Bluesky, calling it a wake-up call for law firms. The case shows just how easily unverified AI-generated content can enter the legal process—and how close it can come to shaping judicial decisions based on fiction rather than fact.
The court’s sanctions serve as a stern reminder: AI tools may be convenient, but they are not infallible. Without rigorous human oversight, the line between automation and malpractice can quickly blur.