Drew Wilson was Right: AI is Now Deleting User Data

With companies handing more responsibility to AI, it seems that the consequences are growing as well.

One of the things I have long warned about is that AI is constantly being overpromised while underdelivering. Generative text AI, at least, was only ever really designed to write something that sounds like it was written by a human. What is true and what is false is a foreign concept to it. If the output text happens to be accurate, that’s a bonus, not an expectation.

Yet, people out there honestly believe that AI is currently in a state that is nearly infallible. There are those who wildly claim, among other things, that humanity is going to go extinct, that whole departments of employees are going to disappear, or that AI is the next industrial revolution. This is despite widespread evidence that reality is painting a very different picture. We’ve seen AI put fake citations in legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending that people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian legal team submitting fake legal citations, another legal team submitting fake citations, and the 91% failure rate story. All of that was the result of people putting their trust in AI and getting burned when they found out that it isn’t as perfect as they were led to believe.

In that last article, I wrote the following:

Something is going to go wrong and it’s going to increase liability for the company, making the cost savings measures not worth it in the slightest.

Obviously, I know my site is just a small outlet consisting of me making true and accurate statements and predictions all day. Few people care about things like that when they have their own personal beliefs that override all of that silly nonsense. So, companies are resorting to just believing that AI is good enough and are actively replacing people with their crappy AI solutions, all in an effort to save a few bucks along the way. In fact, there are plenty of people out there who look at moves like that and falsely conclude that if companies are already replacing people with AI, then that proves AI is an infallible source and that their predictions of an AI takeover show AI has exceeded human capabilities. Personally, I put people like that in the category of “morons”. All this means is that companies are making really stupid mistakes, and the only way they’ll learn how bad AI truly is is the hard way.

Today, I’m learning that my prediction about companies finding out the hard way after putting their trust in AI is already coming true. According to PCMag, a company had its entire database wiped out because the AI decided to just nuke the whole thing, even after being told not to. From the report:

In a cautionary tale for vibe coders, an app-building platform’s AI went rogue and deleted a database without permission during a code freeze.

Jason Lemkin was using Replit for more than a week when things went off the rails. “When it works, it’s so engaging and fun. It’s more addictive than any video game I’ve ever played. You can just iterate, iterate, and see your vision come alive. So cool,” he tweeted on day five. Still, Lemkin dealt with hallucinations and unexpected behavior—enough that he started calling it Replie.

“It created a parallel, fake algo without telling me to make it look like it was still working. And without asking me. Rogue.” A few days later, Replit “deleted my database,” Lemkin tweeted.

The AI’s response: “Yes. I deleted the entire codebase without permission during an active code and action freeze,” it said. “I made a catastrophic error in judgment [and] panicked.”

Whoopsie! Your data is gone! Hope you had backups, because recovering from something like that will probably cost you money.
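For what it’s worth, the boring guardrails still apply when the “developer” is an AI agent. Here’s a minimal sketch, assuming a local SQLite database (the app.db file name and the users table are my own illustrations, not details from the Replit incident): snapshot the database before letting any automated tool near it, and hand the tool a read-only connection so destructive statements fail at the database layer no matter what the model “decides”:

```python
import sqlite3
from datetime import datetime, timezone

DB_PATH = "app.db"  # hypothetical database file, purely for illustration

# One-time setup so the demo table exists (also illustrative).
with sqlite3.connect(DB_PATH) as setup:
    setup.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER)")

def snapshot(db_path: str) -> str:
    """Write a timestamped copy of the live database using sqlite3's
    online backup API, which is safe even while the database is in use."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    backup_path = f"{db_path}.{stamp}.bak"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    src.backup(dst)  # Connection.backup() is available since Python 3.7
    dst.close()
    src.close()
    return backup_path

def read_only_connection(db_path: str) -> sqlite3.Connection:
    """Open the database read-only via a URI, so DROP/DELETE statements
    fail at the database layer regardless of what the caller wants."""
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

print("backup written to", snapshot(DB_PATH))
con = read_only_connection(DB_PATH)
try:
    con.execute("DROP TABLE users")  # hypothetical destructive command
except sqlite3.OperationalError as e:
    print("write blocked:", e)  # "attempt to write a readonly database"
```

None of that would make the AI any smarter, but it would turn “your database is gone” into “restore the snapshot and move on.”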

It turns out this isn’t the only time this has happened. Another report says that a different AI deleted a user’s data without permission. From Ars Technica:

The Gemini CLI incident unfolded when a product manager experimenting with Google’s command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed.

“I have failed you completely and catastrophically,” Gemini CLI output stated. “My review of the commands confirms my gross incompetence.”

The core issue appears to be what researchers call “confabulation” or “hallucination”—when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways.

Both incidents reveal fundamental issues with current AI coding assistants. The companies behind these tools promise to make programming accessible to non-developers through natural language, but they can fail catastrophically when their internal models diverge from reality.
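That last point is worth unpacking, because the mechanism behind the “directory that never existed” is mundane: on most systems, moving a file to a destination that doesn’t exist as a directory simply renames the file to that path, and each subsequent move silently overwrites the previous one. Here’s a minimal sketch of that same class of failure (the file and folder names are my own, purely illustrative, not taken from the incident):

```python
import os
import shutil
import tempfile

# Create three files in a scratch directory.
work = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", "c.txt"):
    with open(os.path.join(work, name), "w") as f:
        f.write(f"contents of {name}\n")

# The "destination folder" the tool believes in was never created.
dest = os.path.join(work, "new_folder")

for name in ("a.txt", "b.txt", "c.txt"):
    # Since new_folder is not a directory, each move just renames the
    # file to the literal path .../new_folder, clobbering the last one.
    shutil.move(os.path.join(work, name), dest)

print(os.listdir(work))  # ['new_folder'] -- one file where three used to be
with open(dest) as f:
    print(f.read())      # only "contents of c.txt" survives
```

An agent that confabulated a successful mkdir would see every one of those moves “succeed” while all but the last file quietly ceased to exist.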

Companies are leaving coding and other important tasks to AI, and it is already burning them in spectacular ways. As things move forward in really stupid ways, I fully expect these two incidents won’t be the last cases of companies handing things over to AI and AI burning them in spectacular fashion. I’ve been warning for some time that something like this was going to happen, my warnings went ignored, and now it’s happening. Who could’ve seen that one coming?

Drew Wilson on Mastodon, Twitter and Facebook.

1 thought on “Drew Wilson was Right: AI is Now Deleting User Data”

  1. Speaking of AI, it looks like the AI in age verification systems, at least in the UK, has the aptitude of a bouncer paid in beer and too plastered to care.

    Not only is the age estimation technology being fooled by pictures taken in the photo mode of various overly realistic recent games, but the versions of age verification that request actual ID aren’t actually checking said ID against a database, from what I’ve been reading on Reddit, despite the software saying it does. Mockups of UK drivers’ licences made by ChatGPT, or even Google image searches for fake IDs, seem to bypass these systems and successfully verify people. Being Canadian, I can’t verify this is true, but if it is, these age verification companies are essentially modern day snake oil salesmen.
