AI Hallucination Sinks Yet Another Court Case

The long, storied tradition of AI being absolutely terrible at everything continues with a hallucination in a court document.

It’s no secret that AI is generally hot garbage. We’ve seen this time and time again. Examples include lawyers getting in trouble for fake AI-inserted citations in legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending that people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian legal team submitting fake citations in their legal briefs, other attorneys submitting legal documents filled with fake citations, the 91% failure rate story, AI deleting user data, the lawyer who got fined $10,000 over a bogus AI-written legal brief, AI killing workplace productivity with workslop, AI having an 81% failure rate in summarizing news content, AI Overview giving out bad health advice, AI being able to successfully complete, at best, only 2.5% of commission work, and AI slowing software development down by 19%.

Yet, despite that being the reality of AI, there’s a continued cult-like mentality that AI is perfect in every way and is practically on the verge of taking over pretty much every conceivable job out there. Never mind that this has been the promise since at least 2023; this time is totally different! Pinky swear promise that this time is different, unlike 2025, 2024, and 2023. This is usually coupled with the promise that AI is totally improving this time, you’ll see! While this mentality is straight up idiotic, the practical side benefit is that it gives people like me an endless supply of stories where AI burned people.

Today is certainly no different. This latest story has the added bonus hilarity of an anti-vaxxer nutcase getting burned on top of it all. A company let an employee go because she refused to get the vaccine. The employee cried foul and claimed that she had a medical exemption. When that was rejected, she claimed a religious exemption, citing the obviously fake idea that vaccines were created using tissue from aborted babies. Pretty standard anti-vaxxer lunacy. Unsurprisingly, that got rejected as well.

So, what is a nutcase to do at this point? Why, use AI of course! AI is never wrong, after all! That went about as well as you’d expect. From HRD:

Davidson’s legal troubles deepened when she cited “AHRC v. Alberta (Aboriginal Affairs), 2011 ABQB 56” in her review submissions. The respondent’s counsel flagged the citation as fabricated, allegedly the result of using artificial intelligence.

Ahmed acknowledged in his decision: “I accept that there is no such authority as ‘AHRC v. Alberta (Aboriginal Affairs), 2011 ABQB 56’ cited and relied upon by the complainant.” Ahmed stated: “However, I am unable to make any specific finding or accept the respondent’s submissions regarding any perceived use of LLM by the complainant.”

He used the opportunity to remind parties about court notices regarding AI use, writing: “The extent of my reference to the use of LLM in these reasons is to remind any party appearing before the Tribunal to apprise themselves of the joint notice to the profession issued by our courts which applies to any party self representing or not.”

Whoopsies! None of this is particularly surprising, but in retrospect, it does make sense that an anti-vaxxer would go down this road. After all, the belief that AI is perfect in every way requires rejecting established facts and persistently insisting that everything about your own thinking is the truth. Is it really that surprising that anti-vaxxer beliefs and AI perfection beliefs would go hand-in-hand? Probably not.

Still, I do get to toss this additional case onto the pile that will promptly be ignored by the die-hard true believers who insist AI is this magical technology that can do everything better than any human. This is especially hilarious given how many people believe that AI is simply going to take over the legal profession, despite all of AI’s repeated failings in the past.

Drew Wilson on Mastodon, Twitter and Facebook.

