A California lawyer submitted a legal brief written by AI that was packed full of fake citations. He has been fined.
One of the more pervasive myths pushed by mainstream media is that generative AI has been perfected and that it’s only a question of how long it’ll take before AI takes over everyone’s jobs. Sometimes, this is the result of reporters watching too many Hollywood movies and presuming that if it happened in a sci-fi movie, then it must be what’s going on in real life. Other times, it’s likely the result of companies pushing ridiculous claims, with obvious scam artists trying to project that their product has been perfected when reality clearly demonstrates otherwise.
Of course, that doesn’t stop mainstream media from gobbling all of this up and conjuring up ridiculously stupid things like how AI is going to cause humanity to go extinct. In this area, mainstream media can be incredibly gullible.
The unfortunate side-effect of the gullible media believing this crap is that some people will look at these obviously false reports and honestly believe that generative AI is perfect in every way and can produce anything you want it to produce with the highest quality imaginable. That has led to some very real-world consequences. Examples include AI inserting fake citations into legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian legal team submitting fake citations in their legal briefs, other attorneys submitting legal documents filled with fake citations, the 91% failure rate story, and AI deleting user data.
Of course, it’s stories like these that the mainstream media generally sweeps under the rug while continuing to push the narrative that generative AI is this ominous threat of perfected technology that will be taking over people’s jobs. So, it doesn’t come as a surprise that we’re seeing even more fallout from people believing that generative AI is this perfected technology that will make your work a breeze to produce. In California, a lawyer has been fined $10,000 for submitting a legal brief with citations to almost entirely fake cases. From PCMag:
The California lawyer says he wrote the brief before running it through AI tools to ‘enhance’ it, but did not read it again before submitting, and claims he did not know hallucinations were a thing.
A California attorney made an expensive mistake when trying to cut corners on a legal brief.
Amir Mostafavi submitted an appeal in an employment-related case, but 21 of the 23 cases he cited to support his argument were fake—hallucinated by AI—or included phony quotes from existing cases. Judge Lee Smalley Edmon sanctioned Mostafavi and fined him $10,000.
“To state the obvious, it is a fundamental duty of attorneys to read the legal authorities they cite in appellate briefs,” Judge Edmon says in a strongly worded opinion. “Plainly, counsel did not read the cases he cited before filing his appellate briefs: Had he read them, he would have discovered, as we did, that the cases did not contain the language he purported to quote, did not support the propositions for which they were cited, or did not exist.”
Mostafavi claims he wrote the first draft of the brief but then used AI tools such as ChatGPT, Grok, Gemini, and Claude to “enhance” it. He did not read through the final version before filing it, and says he should not be fined because he did not know AI tools can make up information.
The facepalm you could give for that excuse would be fatal if it were a proportional response. For one, it is up to the attorney to, at minimum, check the freaking work before submitting it to make sure it is acceptable. Your name is on the work, after all, so you own up to it. For another, AI hallucinations have been a well-known problem with AI for years now. Anyone out there who claims to be following AI in any capacity who isn’t aware of AI hallucinations is only outing themselves as someone who doesn’t actually understand AI to any reasonable degree. I mean, what’s next? Suggesting that you didn’t know it would be a bad idea to watermark your legal briefs with giant purple dragons? puts finger to ear I just received word that this did actually happen earlier this year. Uh, never mind.
At any rate, it is interesting to know that there is at least one instance where a lawyer got fined for using AI to generate legal briefs. Given that there are a lot of not-so-bright people out there, I fully expect to hear more cases of AI burning people who are trying to find a shortcut to getting their work done going forward.
Drew Wilson on Mastodon, Twitter and Facebook.
unrelated to this topic but thought I should let you know s-209’s committee meetings begin Wednesday.
I’m actually surprised it’s going to committee. Now to figure out if I missed the boat in filing a submission.
How confident are we that something will actually change this time around before S-209 reaches the House of Commons? It still has to go through Senate committee, and there was some hesitation iirc last time. It’s like we are relying on the Libs and NDP to vote against S-209 vs the CPC and Bloc, while relying on the CPC to vote against Bill C-2 (and I don’t know the CPC’s overall position).
honestly not really confident at all. but we do have new committee members this year and they seem to be bringing in privacy specialists and such so…who knows?
that being said I wonder how long these meetings will take this time. last time the senate committee meetings took almost a year with several meetings over several months.