Peer review is the central filter of scientific publication. Independent experts evaluate the rigor, originality, and relevance of each manuscript before it is accepted. However, a scientific article written by an AI has just passed this filter without being identified as such, raising questions about the future of academic research.
The AI Scientist, a system that handles all stages of research
The company Sakana AI, in collaboration with the University of British Columbia and the University of Oxford, has developed a system called The AI Scientist. The system does more than write text: it formulates research hypotheses, writes the necessary computer code, runs experiments, analyzes the results, produces figures, and writes the entire manuscript.
To test its capabilities, the team submitted three articles generated by this AI to a workshop at the ICLR 2025 conference, one of the most significant events in the field of machine learning. The experiment was approved by an ethics committee at the University of British Columbia and was carried out with the conference organizers' agreement.
According to the study published in Nature, reviewers awarded one of the three articles scores of 6, 7, and 6, an average of 6.33, enough for acceptance. That score placed the article above 55% of the human-written submissions to the same workshop.
A scientific article written by AI, but riddled with flaws
Despite this apparent success, the system has notable weaknesses. As reported by Phys.org, the AI produces hallucinations, that is, invented information presented as fact. Observed problems include fictitious bibliographic citations and figures duplicated without justification.
Moreover, the acceptance rate of the targeted workshop was between 60 and 70%, a far less selective threshold than that of major conferences, which hovers around 20-30%. In other words, the bar the AI cleared falls well short of the scientific community's most demanding standards.
Still, the experiment shows that the human reviewers did not identify the manuscript's artificial origin, which calls into question the current peer review system's ability to distinguish human from automated work.
Towards an inflation of automated publications
The arrival of these tools raises legitimate concerns in the academic world. If an AI can produce acceptable articles in a few hours, nothing stops it from generating hundreds of simultaneous submissions, and review committees could find themselves inundated with artificial manuscripts.
In parallel, this technology could fuel an inflation of publications. In a system where the number of published articles shapes careers, the temptation to use AI to multiply submissions is obvious. Some observers also fear that certain actors could conduct unethical experiments, impossible for humans to oversee, and then publish them automatically.
Other voices, however, highlight the positive potential of automation: by relieving researchers of the most repetitive tasks, these tools could free them to focus on creativity and interpretation. The central question is not whether AI will write research articles, but how the scientific community will adapt its safeguards to absorb the impact.




