Chicago Sun-Times Reading List Becomes Latest AI Slop Incident

Another high-profile incident involving some bonehead leaving it to AI to do all the work. This time, it’s the Sun-Times reading list.

It just keeps happening. Someone in a major position decides to just let AI (Artificial Intelligence) do all the work, only to get burned when they discover that generative AI models suck. Throw in the lazy aspect of not bothering to check the work and you get to see all these fun incidents of what happens when idiots leave their jobs entirely to AI.

So, how does this keep happening? Well, simply put, there is a cult-like mentality that generative AI has been fully perfected and is completely infallible. If you ask it to write something, not only will it sound great, but it will be totally accurate in everything it writes. So, if you let AI write, for instance, a news article, there would be no errors and you could just go about your day without having to put in an ounce of additional effort.

This can be traced to all the media companies and AI developers hyping the living daylights out of AI. This includes people going to the extreme of saying that AI is going to make humans obsolete in a huge number of jobs. Other hype we heard includes how humanity could be at risk of going extinct by the end of 2024 (LOL!), how artists are going to become worthless, how AI must be packed into everything, how AI is going to spark the next industrial revolution, and so on and so forth.

The reality is very different. AI is little more than overhyped fluff meant to skim some money from those foolish enough to believe whatever the latest hype train is selling. Sadly, the hype being pushed has been disturbingly effective, as lawmakers from around the world regularly host summits to talk about AI regulation and how best to work with the technology moving forward.

All text-based generative AI really is is a glorified auto-complete. It’s good at generating text that sounds like it was written by a human being, but that’s pretty much it. Is the text accurate and grounded in facts? That’s not the job of the AI. All the AI really concerns itself with is whether or not its output looks realistic to the average human eye. At that point, the job is done as far as the AI is concerned. When it doesn’t know something, or the facts don’t line up with what it is trying to present, it just makes stuff up as it goes along – hence the problem of hallucinations.
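To make the “glorified auto-complete” point concrete, here’s a toy sketch (my own illustration, not any real model, and vastly simpler than an actual large language model): a tiny bigram “model” that only learns which word tends to follow which in its training text, then chains likely next words together. The output can read fluently while having no connection to truth – which is exactly how a system like this can confidently string together a plausible-sounding book title that was never written.

```python
import random

# Toy bigram text generator (illustrative only). It "trains" by counting
# which word follows which, then generates by repeatedly picking a likely
# next word. Fluency is the only goal; factual accuracy never enters into it.

corpus = ("the author wrote a bestselling novel . "
          "the author wrote a science thriller . "
          "the novel follows a programmer . "
          "the thriller follows a detective .").split()

# "Training": record every observed next-word for each word.
model = {}
for cur, nxt in zip(corpus, corpus[1:]):
    model.setdefault(cur, []).append(nxt)

def generate(start, length, seed=0):
    """Chain statistically plausible next words -- fluent, but fact-free."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Run it a few times with different seeds and you’ll get grammatical-looking fragments recombined from the training text, some describing “books” that appear nowhere in the corpus. Scale that same idea up by billions of parameters and you have, roughly, the hallucination problem.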

Yet, the reality just doesn’t sink in for some people. They believe AI is totally and legitimately perfect in every way. It’s how you get incidents like lawyers getting busted for using AI to create fake legal citations in court documents, the CNET scandal, the Gannett scandal, bad AI predictions, fake stories based on poorly parsed social media posts, fake stories about high profile cases, recommendations that users eat rocks, 15% success rates on problem solving, and garbage chess strategies.

Now, you would think that years of incidents where people leave their work to AI, only to get completely burned by it, would have sent the message that AI sucks if you let it do all the work. People really ought to have learned by now that AI is bad at their job. Sadly, people still fall for the AI hype and just let AI do all the work for them, and you continue to get incidents where this burns the person in the end. That’s exactly what happened in the now infamous case revolving around the Chicago Sun-Times “Summer Guide”.

Essentially, some bozo decided to let AI create a reading list of books. The problem, naturally, is that the AI mostly just made the books up. Once the list was generated, the person behind it decided not to bother checking anything over and hit “publish”. It was, after all, an exercise in extreme stupidity, and the consequences of that action quickly became apparent. From TechDirt:

The latest scandal comes courtesy of the Chicago Sun Times, which was busted this week for running a “summer reading list” advertorial section filled with books that simply… don’t exist. As our friends at 404 Media note, the company somehow missed the fact that the AI synopsis was churning out titles (sometimes by real authors) that were never actually written.

Such as the nonexistent Tidewater by Isabel Allende, described by the AI as a “multigenerational saga set in a coastal town where magical realism meets environmental activism.” Or the nonexistent The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian, which readers were (falsely) informed follows “a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.”

Unlike some past scandals, one (human) Sun-Times employee was at least quick to take ownership of the fuck up:

“The article is not bylined but was written by Marco Buscaglia, whose name is on most of the other articles in the 64-page section. Buscaglia told 404 Media via email and on the phone that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses,” he said. “On me 100 percent and I’m completely embarrassed.”

Buscaglia added “it’s a complete mistake on my part.”

“I assume I’ll be getting calls all day. I already am,” he said. “This is just idiotic of me, really embarrassed. When I found it [online], it was almost surreal to see.”

Initially, the paper told Bluesky users it wasn’t really sure how any of this happened, which isn’t a great look any way you slice it.

Later on, the paper issued an apology that was a notable improvement over past scandals. Usually, when media outlets are caught using half-cooked AI to generate engagement garbage, they throw a third party vendor under the bus, take a short hiatus from whatever dodgy implementation they were doing, then, in about three to six months, just return to doing the same sort of thing.

Indeed, the response from the paper is a marked improvement over other incidents involving other parties where it’s just a game of finger pointing. There, the lesson the company ends up taking is that it just needs to double down on AI slop afterwards and be more careful not to get caught in the future (the wrong response, really). So, the fact that someone admitted to the mistake is an improvement, even though that’s a pretty low bar to clear.

Of course, this insert was added to a number of other publications as well. It’s not just tied to one particular paper. As a result, plenty of people got to see the AI slop screw-up in all of its glory.

One remarkable thing I found was that some outlets were shocked at the prospect that AI can “hallucinate” (AKA, just make up some random BS to make the text sound authentic). In a CBC broadcast yesterday, reporters expressed shock that this is an actual thing. One said that they had never heard of such a thing until that day. I mean, it has only been going on for years now, and if you follow the news surrounding AI (as a lot of publications these days do while trying to out-hype each other about it all), then this shouldn’t even really be news, let alone a shocking revelation.

It’s that kind of reaction that suggests that media companies are still just taking press releases from large companies at face value without any context. This is, after all, why responsible outlets like Freezenet actually do digging and research instead of letting ourselves get sucked into whatever fad happens to be going around at the time. Of course, you do you, CBC. I’ll just be over here publishing reports based on accurate information.

At any rate, since some people just can’t seem to wrap their heads around the idea that AI might not actually be a perfect technology, I strongly suspect this will be far from the last time we see incidents like this. There will likely always be some moron out there who decides that maybe they can leave their writing assignment to AI, only to get busted for it in a very high profile manner (just like in this recent case). They’ll believe that AI is perfect, then act all shocked when they find out that the slop it produced was filled with made-up garbage. We’ll continue to point these incidents out, and point and laugh at them afterwards too, if we have a moment to spare from whatever other insanity is going on at the moment.

Drew Wilson on Mastodon, Twitter and Facebook.
