The Citation Death Spiral: How AI Slop is Breaking Academic Research

Do we actually care whether a research paper is correct, or do we just care that it has a high citation count? We should care about correctness, but the current academic incentive structure has turned the scientific method into a game of SEO for intellectuals. We’ve reached a point where the goal isn’t to discover something new, but to produce a document that looks enough like a paper to trick a tired reviewer and a naive algorithm.

The citation death spiral

  • AI is generating “slop” papers that mimic the structural markers of real research without containing actual insight.
  • Peer reviewers are using LLMs to summarize and critique these papers, creating a closed-loop system where no human ever actually reads the text.
  • Citation counts are inflating because AI-generated papers cite other AI-generated papers, creating a fake consensus (a closed loop in the citation graph; see the sketch after this list).
  • The “paper mill” industry is now automating the production of fraudulent studies at a scale that makes manual verification impossible.
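
To see how flimsy that “consensus” is, here is a minimal sketch of a citation ring as a graph problem. Everything below is invented for illustration: a ring shows up as a closed loop of papers that cite their way back to themselves, which no honest bibliography ever needs to do.

```python
# Hypothetical illustration: a mutual-citation ring is a cycle in the
# citation graph. All paper names and edges here are invented.

# paper -> papers it cites
cites = {
    "slop_A": ["slop_B"],
    "slop_B": ["slop_C"],
    "slop_C": ["slop_A"],   # A -> B -> C -> A: a closed loop of "consensus"
    "honest": ["slop_A"],   # cites into the ring but is not part of it
}

def in_citation_ring(start: str, graph: dict) -> bool:
    """True if `start` can reach itself by following citations (iterative DFS)."""
    stack, seen = list(graph.get(start, [])), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

for paper in cites:
    print(paper, "->", in_citation_ring(paper, cites))
# slop_A, slop_B, slop_C print True; honest prints False
```

Real rings are larger and pad themselves with legitimate references to blend in, but the structural smell is the same: citations that only ever loop back on themselves.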

The real problem here isn’t that LLMs can write a convincing abstract. The problem is that the academic world has spent decades building a currency—the citation—that is far too easy to counterfeit. When you tie a professor’s salary and tenure to a number on a screen, you aren’t encouraging science; you’re encouraging a high-frequency trading strategy for prestige. It’s essentially a Ponzi scheme for PhDs, where the “value” of a researcher is based on how many other people claim to have read their work, regardless of whether they actually did. It is a system designed for a world where humans had to manually type out bibliographies, and it is completely broken in an era where a script can generate a thousand citations in three seconds.
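
To put a number on “too easy to counterfeit,” here is a toy script, entirely my own illustration with fabricated authors, venues, and titles, that mints syntactically valid BibTeX far faster than any human could read it:

```python
# Toy sketch: mass-producing syntactically valid, semantically empty
# citations. Every name, venue, and title below is fabricated.
import random

AUTHORS = ["A. Smith", "B. Chen", "C. Ivanov", "D. Okafor"]
VENUES = ["Intl. Journal of Synthetic Results", "Proc. of Plausible Findings"]

def fake_bibtex_entry(i: int) -> str:
    """Generate one fake BibTeX entry that any parser will happily accept."""
    return (
        f"@article{{slop{i},\n"
        f"  author  = {{{random.choice(AUTHORS)}}},\n"
        f"  title   = {{A Novel Framework for Study {i}}},\n"
        f"  journal = {{{random.choice(VENUES)}}},\n"
        f"  year    = {{{random.randint(2015, 2024)}}}\n"
        f"}}"
    )

# A thousand "citations" in well under three seconds on any laptop.
bibliography = "\n\n".join(fake_bibtex_entry(i) for i in range(1000))
print(bibliography[:180])
```

The bottleneck was never typing; it was reading, and the metric never checked for that.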

As detailed in The Verge, this has evolved into a feedback loop of synthetic noise. The article highlights how “paper mills” (entities that churn out fraudulent research for a fee) are now using AI to automate the production of fake studies. This isn’t just a few bad actors; it’s the industrialization of academic fraud. We see this in the strange case of Peter Degen’s supervisor, who noticed that a 2017 paper was being cited at an impossible rate. Why? Because the bots had decided it was a “safe” paper to cite to make other bot-written papers look legitimate. Who is actually reading this stuff? (Probably no one, unless they’re looking for a specific equation to copy-paste.) The friction isn’t in the technology; it’s in the human ego. We’ve created a system where the volume of output is prized over the validity of the result.
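
The Verge piece does not say how the anomaly was actually caught, but the general shape of such a check is easy to sketch. Assuming you have per-year citation counts, you flag any paper whose latest year blows past its own history:

```python
# Hypothetical sketch (not the actual detection method from the article):
# flag a paper whose latest yearly citation count is a wild outlier
# relative to its own earlier years.
from statistics import mean, stdev

def citation_spike(yearly_counts: list[int], threshold: float = 3.0) -> bool:
    """True if the latest year is more than `threshold` standard deviations
    above the mean of all prior years."""
    history, latest = yearly_counts[:-1], yearly_counts[-1]
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu  # perfectly flat history: any jump is suspicious
    return (latest - mu) / sigma > threshold

# A paper quietly cited ~5 times a year suddenly pulls in 400 citations.
print(citation_spike([4, 6, 5, 7, 5, 400]))  # True
```

The catch, as the article makes clear, is scale: a check like this is trivial for one paper and hopeless as a manual process across millions.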

The irony is that this “slop” is now leaking back into the training sets of the very models that created it. We are witnessing a digital version of the Habsburg dynasty: incestuous data breeding a generation of models that are increasingly inbred and hallucination-prone. When a model is trained on a research paper that was written by a model, which was citing a paper that was also written by a model, the “truth” becomes a game of telephone played by GPUs. If we continue to let synthetic garbage pollute the primary record of human knowledge, we aren’t just making it harder to find the truth; we’re erasing the trail. It’s like a chef trying to make a gourmet meal from ingredients that are actually plastic replicas of food. It looks right on the plate, but it’s completely indigestible.
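
You do not need an LLM to watch the inbreeding happen; a toy statistical model decays the same way. This sketch, mine and not anyone’s published experiment, refits a Gaussian on samples drawn from its own previous fit, generation after generation:

```python
# Toy sketch of recursive training on synthetic data: each generation is
# fit only on samples from the previous generation's model. Sampling error
# compounds, and the fitted variance tends to drift toward zero.
import random
from statistics import mean, stdev

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the real, human-written distribution
for gen in range(1, 301):
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]  # tiny "corpus"
    mu, sigma = mean(synthetic), stdev(synthetic)             # retrain on it
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```

Over enough generations the variance all but collapses: the tails, which is where the rare and surprising results live, are the first thing the copies of copies forget.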

We cannot “detect” our way out of this. AI detectors are a joke, and the arms race between generator and detector is a waste of expensive H100 cycles. The only way to fix this is to kill the h-index as a primary metric of success. By Q4 2025, the h-index will have collapsed as a credible indicator of quality, forced out by a move toward smaller, verified, human-only audit trails and open-source replication requirements. Until the incentives change, the journals will keep printing noise and the researchers will keep citing it, because their mortgages depend on it.
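
For anyone who has never actually computed one: your h-index is the largest h such that h of your papers each have at least h citations. A minimal sketch, with invented numbers, of both the metric and how cheaply a slop ring counterfeits it:

```python
# Minimal sketch of the h-index: the largest h such that h papers have
# at least h citations each. All citation counts below are invented.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

honest = [12, 9, 4, 3, 1]  # a career of organically cited work
print(h_index(honest))     # 3

# A ring of 10 slop papers, each citing the other 9, vaults the "author"
# past the honest researcher without a single human reader involved.
ring = honest + [9] * 10
print(h_index(ring))       # 9
```

Tripling a career metric costs one afternoon of scripting, which is exactly why the metric has to go.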

Academia is currently a giant LLM hallucination with tenure.
