Canadian Legal Counsel Uses AI to Write Their Legal Case. Finding Out Ensues

It’s happened again. This time in Canada. A lawyer decided to use AI to write their legal brief. The judge wasn’t having it.

There’s more hype than substance when it comes to the power of generative AI. That hype has included claims that AI was achieving sentience, that it would drive humanity extinct by the end of 2024, or that it would replace anyone creating content any day now. The reality, of course, is much more mundane. Generative AI may be good at translating from one language to another or helping writers fix grammar mistakes, but it is otherwise a glorified auto-complete designed to produce text that merely sounds like it was written by a human being, and little else. Facts and reality are just not things that such Large Language Models (LLMs) are really fully capable of grasping.

Yet, there is an endless parade of idiots and sales people who insist otherwise. The result? Idiocy making the news as people enter the “finding out” phase after realizing that AI was a bad choice to do their work for them – all while not even bothering to check the output afterwards to ensure it meets the standards expected of it.

Over the last couple of years, we’ve documented this parade of stupidity and chain of people entering the “finding out” phase. There’s the law case in the US that involved fake legal citations, the CNET scandal, the Gannett scandal, the bad “journalism” predictions, fake news stories due to poorly parsed social media posts, fake stories about high profile cases, Google’s infamous ‘eat rocks’ recommendation, the 15% problem solving success rate study, garbage chess tactics, and, of course, the Chicago Sun-Times reading list fiasco.

As we noted in the last example above, we fully expected another moron to come along and make the exact same mistake of assuming generative AI could handle their writing assignment, only to enter the “finding out” part afterwards. As a result, we have quite the well of content to work with. One week later, we were proven right.

Today, we are learning that a Canadian lawyer decided to do exactly that. They let generative AI write their legal filing, they clearly didn’t check the work afterwards, and a judge found out and was clearly not amused. The legal case in question is R. v. Chand, 2025 ONCJ 282, and the judge’s response to the filing with its fake citations left little question as to what happened:

[1] Mr. Chand is charged with Aggravated Assault and related offences. The trial evidence is complete, and the defence and Crown have provided their final submissions in writing. Unfortunately, there are serious problems with the defence submissions.
[2] One of the cases cited appears to be fictitious. The court was unable to find any case at that citation. There was no case by that name with that content at any other citation.
[3] Several case citations led to unrelated civil cases. Some case names were potentially related to self-defence, but the citations were for completely different cases. Other citations led to the case named, but the case did not provide authority for the point cited. The errors are numerous and substantial.

[5] I appreciate that this case likely turns on findings of fact and credibility, not the legal points in the defence submissions. I also appreciate that the general test for self-defence does not appear to be at issue. The disagreement between the parties is primarily a factual dispute not a legal one. However, Mr. Chand is entitled to the benefit of full submissions on all aspects of the case. I find it necessary to order that Mr. Ross personally prepare a new set of defence submissions within the following guidelines:

• the paragraphs must be numbered;

• the pages must be numbered;

• case citations must include a pinpoint cite to the paragraph that illustrates the point being made;

• case citations must be checked and hyperlinked to CanLII or other site to ensure accuracy;

• generative AI or commercial legal software that uses GenAI must not be used for legal research for these submissions.
[6] Mr. Ross has done a good job presenting the defence in this case. I’m confident that he will be able to prepare proper submissions within these guidelines.

Congratulations to the legal defence counsel on that case: you have been added to the list of morons we’ve been tracking who decided to let AI do all their work for them, only to learn afterwards what a horrible mistake that is! Don’t worry, though, you won’t be the last, either.

(Via @MGeist)

Drew Wilson on Mastodon, Twitter and Facebook.
