Scientific journals are dealing with a flood of AI slop, and with the release of Prism, the fear is that things are only going to get worse.
One of the long-running themes we’ve been witnessing for a while now is that generative AI is ruining pretty much everything. At this point, we’re simply compiling a running list of examples of the many ways generative AI is failing. Those examples include lawyers getting in trouble for fake AI-inserted citations in legal briefs, the CNET scandal, the Gannett scandal, bad “journalism” predictions, fake news stories, more fake stories, Google recommending that people eat rocks, the 15% success rate story, bad chess tactics, the Chicago Sun-Times scandal, a Canadian legal team submitting fake citations in their legal briefs, other attorneys submitting fake-citation-filled legal documents, the 91% failure rate story, AI deleting user data, the lawyer who got fined $10,000 over a bogus AI-written legal brief, AI killing workplace productivity with workslop, AI having an 81% failure rate in summarizing news content, AI Overview giving out bad health advice, AI successfully completing, at best, only 2.5% of commission work, AI slowing software development down by 19%, AI hallucinating in even more court documents, and AI being bad at poker.
One of the things I’ve heard from people pushing AI is that, while it is bad at producing content like art and documents, it is a boon for the scientific community, so there are practical uses for all of this. Even that argument isn’t necessarily true. If we’re talking about dedicated AI tools built for that work, maybe, but if we’re talking about generative AI, it is actually a major source of headaches for the scientific community. Scientific publications have seen a sharp increase in generative AI slop being submitted for review. Given the slop in software development and in the legal community, in retrospect, this isn’t necessarily a surprise. If generative AI sucks at writing legal documents, journalism, and code, why would scientific research be any different?
While things are bad now, there are fears that they could get much worse. OpenAI has released a product called Prism, and it is raising alarms within the scientific community. The fear is not that scientific researchers will become obsolete. In fact, it’s quite the opposite problem: the tool could add even more AI slop to the flood already hitting scientific publications. From Ars Technica:
On Tuesday, OpenAI released a free AI-powered workspace for scientists. It’s called Prism, and it has drawn immediate skepticism from researchers who fear the tool will accelerate the already overwhelming flood of low-quality papers into scientific journals. The launch coincides with growing alarm among publishers about what many are calling “AI slop” in academic publishing.
To be clear, Prism is a writing and formatting tool, not a system for conducting research itself, though OpenAI’s broader pitch blurs that line.
Prism integrates OpenAI’s GPT-5.2 model into a LaTeX-based text editor (a standard used for typesetting documents), allowing researchers to draft papers, generate citations, create diagrams from whiteboard sketches, and collaborate with co-authors in real time. The tool is free for anyone with a ChatGPT account.
“I think 2026 will be for AI and science what 2025 was for AI in software engineering,” Kevin Weil, vice president of OpenAI for Science, told reporters at a press briefing attended by MIT Technology Review. He said that ChatGPT receives about 8.4 million messages per week on “hard science” topics, which he described as evidence that AI is transitioning from curiosity to core workflow for scientists.
The article also covers the flood of AI slop that already exists:
Those concerns are not hypothetical, as we have previously covered. A December 2025 study published in the journal Science found that researchers using large language models to write papers increased their output by 30 to 50 percent, depending on the field. But those AI-assisted papers performed worse in peer review. Papers with complex language written without AI assistance were most likely to be accepted by journals, while papers with complex language likely written by AI models were less likely to be accepted. Reviewers apparently recognized that sophisticated prose was masking weak science.
“It is a very widespread pattern across different fields of science,” Yian Yin, an information science professor at Cornell University and one of the study’s authors, told the Cornell Chronicle. “There’s a big shift in our current ecosystem that warrants a very serious look, especially for those who make decisions about what science we should support and fund.”
Another analysis of 41 million papers published between 1980 and 2025 found that while AI-using scientists receive more citations and publish more papers, the collective scope of scientific exploration appears to be narrowing. Lisa Messeri, a sociocultural anthropologist at Yale University, told Science magazine that these findings should set off “loud alarm bells” for the research community.
These problems are by no means new. We’ve witnessed the consequences of letting AI handle everything in numerous other fields. At best, generative AI is error-prone and filled with hallucinations. At worst, it slows people down despite claims of increased productivity. This is a continuous theme across the different fields in which generative AI has been used.
At any rate, we get to, once again, add to the list of ways generative AI is failing at things. This is happening even as some of the AI pushers out there proclaim that AI is rapidly improving and fixing all the ways it has screwed things up in the past. The evidence continues to show that AI is most decidedly not improving after all.
Drew Wilson on Mastodon, Twitter and Facebook.