Sakana’s AI Paper Passes Peer Review


Sakana, a Japanese AI startup, recently announced that its AI system, AI Scientist-v2, generated a scientific paper that passed peer review. This marks one of the first instances of an AI-produced research paper being accepted for publication. The AI-generated paper was submitted to a workshop at the International Conference on Learning Representations (ICLR), a respected AI conference.

Sakana collaborated with researchers from the University of British Columbia and the University of Oxford to submit a total of three AI-generated papers. These papers were entirely created by the AI, including hypotheses, experiments, data analyses, visualizations, text, and titles. One of the AI-generated papers was accepted to the workshop.

However, Sakana withdrew the paper before publication to maintain transparency and adhere to ICLR conventions. The accepted paper proposed a new method for training neural networks and highlighted ongoing empirical challenges in the field. Despite this achievement, there are important caveats to consider.

Sakana acknowledged that its AI occasionally made citation errors, such as attributing methods to incorrect sources. Additionally, the paper did not undergo the more rigorous scrutiny typical of main conference track publications, as Sakana withdrew it before the final “meta-review.”

It is also worth noting that conference workshop acceptance rates are generally higher than those of main conference tracks. Sakana candidly addressed this point in its blog post, mentioning that none of its AI-generated studies met the internal standards required for main conference track publication at ICLR.

Scrutiny of AI-generated research acceptance

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, pointed out that Sakana’s results might be somewhat misleading. He noted that humans selected the most promising papers from a larger pool of AI-generated content, demonstrating that the combination of human judgment and AI can be effective.


However, this does not necessarily prove that AI alone can independently drive scientific progress. Mike Cook, a research fellow at King’s College London, raised questions about the rigor of the peer review process and highlighted the specific context of the workshop, which focuses on negative results and difficulties. He suggested that AI, known for generating human-like prose, might find it easier to convincingly write about failures.

Cook also emphasized broader concerns about AI’s role in scientific literature. While AI may pass peer review by producing persuasive content, it does not necessarily contribute to meaningful scientific advances. These issues underscore the importance of distinguishing between AI’s ability to generate seemingly valid research and its capacity to produce impactful scientific knowledge.

Sakana does not claim that its AI can create groundbreaking scientific work. Instead, the experiment aimed to examine the quality of AI-generated research and highlight the need for clear norms regarding AI-generated science. The company stressed the importance of judging AI-generated research on its own merits to avoid biases and committed to ongoing dialogue with the research community to prevent undermining the scientific peer review process.

In conclusion, while Sakana’s experiment demonstrates potential and sparks important discussions, it also highlights the current limitations and necessary caution in integrating AI into scientific research.

Photo by Alex P on Pexels
