A brewing controversy at this year’s International Conference on Learning Representations (ICLR) is raising serious ethical questions about AI-generated academic papers and their place in scientific peer review.
Three AI labs — Sakana, Intology, and Autoscience — recently claimed they submitted AI-generated studies to ICLR workshops. Some of those studies were accepted, igniting backlash across the academic community.
At ICLR, workshops manage their own review processes, often selecting studies for publication in the conference’s broader workshop track. According to ICLR organizers, only Sakana disclosed upfront that its submissions were AI-generated and secured consent from reviewers. In contrast, Intology and Autoscience failed to inform organizers or reviewers, triggering concerns over transparency and ethical violations.
The revelation sparked intense criticism from AI researchers and academics, who argued that using human peer review as an AI benchmark without consent exploits volunteer reviewers and undermines the integrity of scientific conferences.
Prithviraj Ammanabrolu, assistant computer science professor at UC San Diego, criticized the move in a widely shared post on X (formerly Twitter), accusing the AI labs of using peer-reviewed venues as free human evaluations without consent. He wrote, “This makes me lose respect for everyone involved, no matter how impressive the system is.”
Peer review is already a time-consuming and unpaid task. A Nature survey found that nearly 40% of academics spend two to four hours reviewing a single study — a workload that has grown as AI research submissions skyrocket. In 2023, NeurIPS, another leading AI conference, saw submissions jump 41% year-over-year, hitting 17,491 papers.
What infuriated many researchers is that Intology openly bragged about the positive reception their AI-generated papers received. In an X post, the company claimed its submissions earned “unanimously positive reviews” and that reviewers praised the AI’s “clever ideas.”
For many in the community, this wasn’t an achievement — it was a manipulation of the peer review system. Ashwinee Panda, a postdoctoral fellow at the University of Maryland, called it “disrespectful” to reviewers who unknowingly evaluated synthetic work.
“Sakana reached out asking if we would be willing to participate in their experiment, and we said no,” Panda shared. “Submitting AI-generated papers without permission is just bad practice.”
Sakana ultimately withdrew its AI-generated paper, admitting that two of its three submissions didn’t meet conference standards and that one contained “embarrassing” citation errors. The company said it pulled the paper out of transparency and respect for ICLR’s standards.
Growing Calls for AI Paper Regulation and Reviewer Compensation
The incident has sparked a broader debate over how scientific communities should handle AI-generated research, and over whether human reviewers deserve compensation when AI labs use them as unpaid evaluation tools.
“Academia is not here to provide free evals for AI companies,” said Alexander Doria, co-founder of AI startup Pleias. He called for a regulated agency to review AI-generated studies — with researchers paid for their time.
The scandal also highlights an ongoing issue in academic publishing. In 2023 alone, up to 17% of AI conference papers contained synthetic text, according to one analysis. But the move by AI companies to actively use peer review for PR and product validation adds a new, unsettling layer.
ICLR organizers have yet to announce any formal policy changes, but the backlash signals a clear need for new rules and transparency standards around AI-generated scientific papers.
Without stricter guidelines, researchers warn the peer review system could become overwhelmed, its credibility eroded by companies treating it as free, unconsented evaluation for AI models.
For now, the debate rages on — underscoring the growing friction between AI development’s breakneck pace and academia’s slower, trust-based systems designed for human research.