
Sakana’s AI Challenges Scientific Research Norms

IMAGE CREDITS: LUX CAPITAL

The debate over AI’s role in scientific research is intensifying, with Japanese AI startup Sakana making a bold claim: its AI system, The AI Scientist-v2, successfully generated a peer-reviewed scientific paper. While this milestone is notable, experts caution that there are significant caveats.

AI’s ability to contribute meaningfully to science remains a hotly debated topic. While some researchers see potential, others argue that AI isn’t yet capable of truly advancing scientific research and discovery. Sakana sits somewhere in the middle of this debate.

The company collaborated with researchers from the University of British Columbia and the University of Oxford to submit three AI-generated papers to a workshop at the International Conference on Learning Representations (ICLR), a well-respected AI conference. These papers were created entirely by AI, covering everything from hypotheses and experiments to data analysis and visualizations.

According to Sakana, this was part of an experiment agreed upon by ICLR’s leadership to assess AI-generated research through double-blind peer review. Of the three submitted papers, one was accepted: a study critiquing existing AI training techniques. However, Sakana voluntarily withdrew the paper before publication, citing transparency and respect for the peer review process.

While an AI-generated paper passing peer review sounds groundbreaking, several factors diminish its impact. In its blog post, Sakana acknowledged that its AI system made citation errors, such as misattributing a method to a 2016 paper instead of the original 1997 study.

Additionally, because the paper was withdrawn after the initial review, it never underwent a meta-review, a stage at which workshop organizers could have further scrutinized and possibly rejected it. Workshop acceptance rates also tend to be much higher than those for the main conference track, and Sakana admitted that none of its AI-generated studies met the standard for ICLR’s main publication track.

Experts Weigh In on the AI’s Role in Scientific Research

AI researcher Matthew Guzdial from the University of Alberta believes Sakana’s claim is somewhat misleading. He points out that human judgment played a key role in selecting which AI-generated papers to submit.

“What this really shows is that humans plus AI can be effective, not that AI alone can create scientific progress,” Guzdial said.

Meanwhile, Mike Cook, a research fellow at King’s College London, questions the rigor of the workshop’s review process.

“New workshops are often reviewed by more junior researchers,” Cook explained. He also noted that the workshop focused on negative results and challenges, which could make it easier for an AI to generate content that fits the theme convincingly.

Cook emphasized that AI has long been able to write human-like text, so passing peer review isn’t necessarily a breakthrough. The bigger concern, he says, is ensuring that AI contributes meaningful knowledge rather than just producing convincing but superficial scientific research.

Sakana acknowledges that its AI isn’t producing groundbreaking discoveries but sees this experiment as an important step toward establishing norms for AI-generated research. The company stresses that scientific work should be judged on merit, not bias against AI authorship.

In its blog post, Sakana warned that if AI’s sole purpose becomes passing peer review, it could undermine the credibility of the entire research process. The company says it will continue working with the research community to ensure AI plays a constructive role in advancing knowledge rather than simply generating content that mimics scientific rigor.

As AI continues to evolve, the scientific community faces urgent questions: How should AI-generated research be evaluated? What safeguards are needed to prevent the dilution of scientific literature? And ultimately, can AI ever be a true co-scientist—or is it just a tool for refining human ideas?

For now, Sakana’s experiment provides a valuable data point—but not definitive proof that AI is ready to lead the future of scientific discovery.
