AI was put to the test in the wonderful world of heads-up poker. Apparently, they aren’t actually all that good at it.
One of the fun things people like us get to see sometimes is events playing out like clockwork. We sit here, find the patterns, and become downright prophetic with our analysis. Right now, AI companies are seeing some pretty troubling (for them) indicators that things aren’t going so well. We’ve seen a $100 billion investment stall, OpenAI internally admitting that it is going to end up losing $14 billion this year alone, and companies in general struggling just to give away their AI products for free, let alone rake in huge amounts of money from premium subscriptions. That’s on top of the usual signs that we are in an AI investment bubble that is becoming overdue for a good bursting.
So, in response to all of this, one of the things we predicted is that these companies would turn to the one tried and true method of increasing their value and making the line on the chart go up: hype. Taking advantage of a gullible mainstream media, they went running to reporters with their tall tales, and the media, not having learned from the last three years of failed promises, ate it all up. That’s why you see headlines like “As software stocks slump, investors debate AI’s existential threat“, “Your phone edits all your photos with AI – is it changing your view of reality?“, or even “U.S. software stocks hit by Anthropic wake-up call on AI disruption“. Pretty standard stuff, in other words: the mainstream media being really, really credulous about anything AI related. It’s not surprising either, given that this has been going on for at least three years now, so why start rubbing two brain cells together when there is so much more stupidity to express?
Either way, the goal for these companies is plain and simple: make the line on the chart go up. What better way to do that than to tell a few more lies to the mainstream media and have them run around the office screaming about the AI takeover a few more times? That’ll get the line on the chart to go up. It worked in the past, so why wouldn’t it work now? As we have shown, the media is still believing the con job and continuing to win “fell for it again” awards. Oh, speaking of which: Reuters, BBC, and CTV, here is your award.
Of course, as readers of Freezenet already know, AI is just pure garbage. I keep adding to the list of AI fails regularly and, these days, there is no shortage of material given how bad AI continues to be. Examples of this include lawyers getting in trouble for fake AI inserted citations in legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian team submitting fake legal citations in their legal briefs, other attorneys submitting fake citation filled legal documents, the 91% failure rate story, AI deleting user data, the lawyer who got fined $10,000 over a bogus AI written legal brief, AI killing workplace productivity with workslop, AI having an 81% failure rate in summarizing news content, AI Overview giving out bad health advice, AI only being able to successfully complete 2.5% of commission work at best, AI slowing software development down by 19%, and AI hallucinating in even more court documents.
I can definitely say that copying and pasting this list and adding to it just doesn’t get old. In fact, the larger the list gets, the more enjoyable it becomes. Today, I get to add to it again. Apparently, a bunch of AI LLMs (Large Language Models) were put to the test in a heads-up poker tournament. Now, the fun thing about poker is that math definitely increases your chances of success. Sure, it isn’t the be-all and end-all because you still have luck and the human factor messing with things, but math, without a doubt, certainly helps.
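To give a sense of the kind of math I mean, here is a rough pot-odds sketch of my own (this is purely illustrative and has nothing to do with the tournament's setup): compare the equity you need to call a bet against a quick estimate of your draw's equity.

```python
# Illustrative pot-odds math (my own sketch, not the tournament's code).

def pot_odds(pot: float, call: float) -> float:
    """Minimum equity needed to call: your call divided by the final pot."""
    return call / (pot + call)

def draw_equity(outs: int, cards_to_come: int) -> float:
    """Rough 'rule of 2 and 4' estimate of a drawing hand's equity."""
    return min(outs * 2 * cards_to_come / 100, 1.0)

# Facing a 100 bet into a 100 pot: the pot is now 200 and a call costs 100.
needed = pot_odds(pot=200, call=100)          # 100 / 300 ≈ 0.33
have = draw_equity(outs=9, cards_to_come=2)   # flush draw on the flop ≈ 0.36
print(f"need {needed:.2f}, have {have:.2f}")  # need 0.33, have 0.36 -> a call
```

Nothing fancy, and it deliberately ignores implied odds and opponent tendencies, but even this back-of-the-napkin arithmetic is the sort of thing a number-crunching machine should nail every time.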
So, knowing this, you would think that AI might actually do well here: a scenario where math is a big component of success and a machine is sitting there crunching the numbers. At least AI might have a chance here, right? Well, if you think that, then, apparently, you overestimate just how good AI is (it sucks). Even at poker, AI really, really sucks.
The tournament was in a bracket format with the following models competing: o3, DeepSeek 3.2, Grok 4, Gemini 3 Flash, GPT 5.2, Gemini 3 Pro, Opus 4.5, and Sonnet 4.5. Well-known poker pro Doug Polk broke things down in three separate videos. For those who want to view the analysis yourselves, here is video number 1:
… and video number 3:
I can admit that I’m still watching this hour and a half of footage, but there are things that are already jumping out at me. An absolute basic thing to understand about poker is, uh, knowing what the board is in relation to your hand.
For instance, if you make it to the turn and the board is a total rainbow (every card a different suit), are you talking about the possibility of a nut flush draw? Obviously not, but there are moments where the AI tends to think so.
If all you have is, say, queen high post-flop, are you talking about having a strong middle pair? No, but in the world of AI, that is apparently a possibility.
Also, once the river is out, are you still talking about having an open-ended straight draw? Obviously not, but in the world of AI, that is totally a thing.
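What makes these blunders so funny is that they are trivially mechanical checks. The two board-reading mistakes above can be sketched in a few lines of my own toy code (card strings like 'As' and '7d' are my own made-up representation; this is illustrative, not how the bots were built or scored):

```python
from collections import Counter

# Toy board-texture sanity checks (my own sketch, purely illustrative).

def flush_draw_possible(board: list[str]) -> bool:
    """A flush or flush draw needs at least two board cards of one suit."""
    suits = Counter(card[-1] for card in board)
    return max(suits.values()) >= 2

def draws_still_live(board: list[str]) -> bool:
    """Once the river (the 5th board card) is out, no draw can complete."""
    return len(board) < 5

rainbow_turn = ['Ah', '7d', '2c', 'Js']         # four different suits
print(flush_draw_possible(rainbow_turn))        # False: no flush draw exists
print(draws_still_live(rainbow_turn + ['9h']))  # False: the river is dealt
```

A first-year programming student could write these checks, yet billion-dollar language models apparently can't apply the equivalent logic at the table.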
Throughout the videos, Polk shows off what the bots were thinking, and the analysis they provide can be downright hilarious. It really is that bad. There’s contradictory analysis, analysis that makes absolutely no sense, and strategy that is just plain baffling. What’s more, the post-hand statistical analysis is frequently nonsensical.
As you can tell from the list of AI fails I keep posting, I see this everywhere. People assume AI is this magical technology that can do everything better than any human, and when it gets put to the test, hilarity ensues. AI sucks at chess, legal documents, news articles, spreadsheets, software development, and so much more. Should it really be surprising that AI sucks at poker too? Probably not. This is just one more example to throw onto the pile.
Drew Wilson on Mastodon, Twitter and Facebook.