AI researchers are coming: What will happen to science?

Scientific research can be a long and tedious process. And a frustrating one, too: Just like time, funding is limited.

A solution to make science more efficient? AI colleagues — potentially, at least.

The first automated AI researcher

In 2024, the Tokyo-based start-up Sakana.ai introduced “The AI scientist” — an AI system that can create new machine learning research from scratch, completely autonomously and for as little as $15 (€13) per paper.

The model can go through the entire research process without any human support: from forming new hypotheses, to running code, to writing up the results.

And it goes even further than that: It has its own peer-review system that automatically evaluates the paper’s quality, ensuring it meets scientific standards.

When an independent team of researchers tested the 2024 version of the system, the quality of its results seemed to be rather low. While it indeed could go through the entire research life cycle by itself, the result was — as the authors put it — like that of “an unmotivated undergraduate student rushing to meet a deadline.”

The most concerning issues were incomplete sections, outdated or limited references, and incorrect — or even fabricated — numerical results, often referred to as “hallucinations” in AI terms.

Still, the researchers saw promise in the system, especially its efficiency. What would have taken that “unmotivated undergraduate student” at least 20 hours, they estimated, the AI peer could do in 3.5 hours — at an average cost of $6 to $15 per paper.

AI paper gets accepted at conference workshop

Now, one and a half years later, Sakana.ai has put the newest version of its system to the test: Three AI-generated papers — along with 40 human-written ones — were submitted for peer review at a workshop of a top-tier machine learning conference.

The reviewers knew that some of the submissions were produced by AI, but not which ones.

About 70% of the submitted papers got through the first round. Two of the AI-generated papers didn’t make the cut, but one of them did, meaning it met the scientific standards of the conference workshop.

These standards are much lower than those of the main conference.

“The study that was accepted there isn’t seen by all scientists as an actual peer-reviewed article,” Jakob Macke, professor of Machine Learning and Science, told the Science Media Center Germany.

And the latest version of the “AI scientist” still has its flaws: Underdeveloped ideas, structural issues, and many types of hallucinations.

But as its own evaluation system shows, the papers seem to steadily increase in quality over time, which means a future with AI scientists doesn’t seem so far away.

AI could solve science’s inefficiency problem

AI systems are tireless. They can read through research papers in a matter of seconds, don’t complain about working overtime, and don’t need to be paid — or at least cost considerably less than a human researcher would.

This could mean more results in less time. And with that, a much more efficient path to scientific discovery.

AI analyzes MRI scans to reveal our biological age

However, the question is in which direction these discoveries would lead us.

When a human conducts research, the end product is the result of dozens, if not hundreds, of small decisions. No two researchers would ever approach a topic in exactly the same way.

These decisions, when made by AI, disappear behind what is considered a “superhuman” system — one that might be deemed smarter, faster and more objective than us.

AI researchers come with risks

So, what if we relied on AI, instead of our own diverse minds?

Anthropologist Lisa Messeri and neuroscientist Molly Crockett expect what they call a “monoculture of science.”

In agriculture, monoculture is the practice of growing only one type of crop — instead of multiple — over a period of time. This usually means higher profits. But at the same time, it increases the risk that the crops will fall victim to pests and diseases.

A similar thing might happen when we have AI systems do science for us. The kind of research AI initiates might only be the type that suits AI best, at the expense of projects that require more context and nuance — the “human touch.”

This might not only narrow the scope of science, but also risk systematic errors that go unnoticed once humans take themselves out of the picture.

“The greatest risk is placing too much trust in AI-generated results. The key countermeasure becomes the human ability to think critically,” Iryna Gurevych, professor of Ubiquitous Knowledge Processing, told the Science Media Center Germany.

Without critical thinking, we might objectively produce more but end up understanding less.

With AI, we not only have much to gain, but also a lot to lose.

The question remains: In science, does human inefficiency offer nothing of value?