Devastating Analysis Shows AI Is Destroying the Credibility of Science Publishing

It is almost impossible to overstate the importance and influence of arXiv, the scientific archive that, for a while, single-handedly justified the existence of the internet. ArXiv (pronounced “archive” or “arr-ex-eye-vee,” depending on who you ask) is a repository where, since 1991, scientists and researchers have been able to announce “hey, I just wrote this” to the entire scientific world. Peer review is fine, and necessary, but it is slow. ArXiv requires only a quick once-over from a moderator instead of a painstaking review, adding a simple middle step between discovery and peer review where the latest discoveries and innovations can be shared, carefully, with the urgency they deserve.

But the rise of AI has wounded arXiv, and the wound is still bleeding. And it is not clear that the bleeding can ever be stopped.

As a recent story in The Atlantic notes, arXiv creator and Cornell information science professor Paul Ginsparg has worried since the emergence of ChatGPT that AI could be used to break through the small but necessary barriers that keep garbage off arXiv. Last year, Ginsparg collaborated on an analysis looking at the footprint of AI in arXiv submissions. Rather alarmingly, the scientists who used LLMs to make their papers sound plausible were apparently more prolific than those who did not use AI: authors posting AI-written or AI-augmented work produced 33 percent more papers.

AI can be used legitimately, the analysis says, for things like crossing the language barrier. It continues:

“However, traditional indicators of scientific quality, such as fluent language, are becoming unreliable markers of merit, just as we are facing an increase in the volume of scientific output. As AI systems develop, they will challenge our basic assumptions about the quality of research, scholarly communication, and the nature of intellectual work.”

It’s not just arXiv. It’s a tough time for the credibility of scholarship in general. A striking column published last week in Nature described the AI misadventure of a scientist working in Germany named Marcel Bucher, who had been using ChatGPT for emails, course materials, lectures, and exams. As if that weren’t enough, ChatGPT was also helping him analyze student responses and was being incorporated into the interactive parts of his teaching. Then one day, Bucher turned off, just “for a while,” what he called “data consent,” and ChatGPT promptly deleted all the data he had stored within the application, i.e. on OpenAI’s servers. As he lamented in the pages of Nature, “two years of carefully planned academic work disappeared.”

The pervasive, AI-induced laziness on display in the very places where rigor and attention to detail are expected is enough to cause despair. It was safe to assume there was a problem when publication counts jumped in the months after ChatGPT’s initial release, but now, as The Atlantic reports, we are starting to learn the true nature and scale of that problem. And it is not so much people like Bucher, who merely lean on AI, as it is academics facing publish-or-perish anxiety and hastily releasing fake papers at industrial scale.

For example, in cancer research, bad actors can prompt an AI for boring papers that purport to document “the interaction between a tumor cell and one of the many thousands of proteins that exist,” notes The Atlantic. If a paper claimed a breakthrough, it would raise eyebrows, meaning the trick might be noticed; but if the fake conclusion of a fake cancer experiment is ho-hum, that paper is liable to slip through and get published, even in a reputable journal. Better still if it comes with AI-generated images of gel electrophoresis blots that are equally boring but add a little extra verisimilitude at a glance.

In short, a flood of slop has arrived in science, and everyone involved should slow down, from busy academics designing their studies to peer reviewers and arXiv moderators. Otherwise, the archives that were among the few reliable sources of information left will be overwhelmed by the disease that has already infected them, perhaps irreversibly. And does 2026 feel like a time when anyone, anywhere, is slowing down?
