Researchers Unleash AI into Crypto Investing. Hilarity Ensues

If you ever wondered what it is like to put AI in charge of investing, researchers are now starting to answer that question. The initial findings are not good.

The ongoing myth about generative AI is that it can handle everything better than humans. A growing body of evidence is showing otherwise.

To recap that increasingly long body of evidence, we’ve seen examples of AI performing poorly in a number of scenarios. Those examples include lawyers getting in trouble for fake AI-inserted citations in legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending that people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian legal team submitting fake citations in their briefs, other attorneys submitting legal documents filled with fake citations, the 91% failure rate story, AI deleting user data, the lawyer who was fined $10,000 over a bogus AI-written legal brief, AI killing workplace productivity with workslop, AI posting an 81% error rate when summarizing news content, AI managing a 3% success rate on freelance work, and AI being unable to handle a simple inventory form.

Yet, despite all the evidence I have compiled, I still get people telling me that I know nothing about AI and that AI is practically perfect technology. It’s basically a cult mentality where any evidence contrary to their personal beliefs is either completely disregarded or treated as a minor inconvenience. No amount of evidence will convince some people that AI is actually hot garbage, because that would contradict their personal beliefs. It’s insane, but that is the reality we live in today.

One idea I’ve seen bandied about is to take this mythical perfect AI technology and unleash it on the stock markets. Make AI an investor and it’ll probably outperform silly humans at this game. In the process, it would save money because an AI doesn’t need to be paid for any of this. Sometimes this gets talked about on certain social media platforms; other times, the concept pops up in obvious scam ads.

No doubt some people think about the concept and wonder what could possibly go wrong with such an idea. Well, as it turns out, researchers decided to put this concept to the test. Essentially, they took six major AI models and gave each $10,000 in seed money, then unleashed them into the crypto derivatives markets to see how well they would do. The results? Apparently, they didn’t do so well. From Reuters:

Turn an artificial-intelligence bot into a trader and it acts all too human. Over a two-week span, six frontier models were seeded with $10,000 apiece to trade digital-coin derivatives. Five finished deep in the red, while the last barely scraped by despite its flimsy risk-adjusted score. For the time being, primate fund managers can rest easy knowing machines simply ape their worst tendencies.

The Alpha Arena experiment conducted by U.S. startup Nof1 took place on Hyperliquid, a crypto-derivatives venue that allowed the bots to buy and sell perpetual futures using real money. Each generated its own thesis, sized risk and executed transactions around the clock, logging its reasoning in real time. Discipline and dispassion were in short supply.

Instead, the books read more like a turbocharged Reddit board. Outcomes ranged from Qwen’s $652 loss, excluding a modestly successful open trade at the end of the trial, to GPT-5’s $5,679 loss. Claude, DeepSeek, Gemini, and Grok frittered away a third to half their stakes. Only about one in four trades made money, and a simple risk-adjusted return measure sat below zero. The bots didn’t just recklessly lose money, but paid handsomely for the privilege: roughly 10% of the funds went to fees.

The similarities to mere mortals were striking. Large-language models chased trends and ignored basic risk discipline. Qwen’s winning bitcoin trade borrowed $19 for every $1 it put in. None leveraged itself less than 10 times. Rather than map the market, the machines exhibited adrenaline-junkie behavior, racking up 628 trades over the short span.

Academic research confirms the patterns. A recent survey of more than 80 studies found that AI models help process information, but struggle once real-world frictions enter the frame. Machine-learning strategies that look strong in controlled environments collapse when tested over longer periods or wider markets, another research paper concluded.
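To put the leverage figures in the excerpt above into perspective, here is a minimal sketch of how leverage and fees amplify losses on a perpetual-futures position. The function and all the numbers in it are hypothetical illustrations, not figures from the study; the fee rate is an assumed placeholder.

```python
# Illustrative sketch (not from the Alpha Arena study): how leverage
# amplifies losses on a perpetual-futures position. All numbers,
# including the fee rate, are hypothetical.

def leveraged_pnl(margin, leverage, price_move_pct, fee_rate=0.0005):
    """Return net profit/loss on a long position.

    margin: the trader's own capital at risk
    leverage: position size as a multiple of margin (e.g. 20 means
              $19 borrowed for every $1 put in)
    price_move_pct: underlying price change, e.g. -0.05 for a 5% drop
    fee_rate: assumed taker fee on notional, paid on entry and exit
    """
    notional = margin * leverage
    gross = notional * price_move_pct
    fees = notional * fee_rate * 2  # entry + exit
    return gross - fees

# A 5% drop on an unleveraged $100 position loses about $5...
print(leveraged_pnl(100, 1, -0.05))   # -5.1
# ...but at 20x leverage, the same move wipes out the entire stake.
print(leveraged_pnl(100, 20, -0.05))  # -102.0
```

Note that fees scale with the notional position, not the margin, which is how a fund can burn roughly 10% of its capital on fees alone when it trades large leveraged positions hundreds of times in two weeks.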

The results are both hilarious and unsurprising. People think that AI can do anything and everything, yet for every task thrown at it, it has a habit of failing spectacularly. The fact that it couldn’t handle crypto trading only adds to the growing list of things it can’t do. Some might argue that it’s only going to improve from here on out, but that is an excuse I’ve been hearing since at least 2023, and it hasn’t gotten better since then.

Drew Wilson on Mastodon, Twitter and Facebook.

