
Researchers ‘polarised’ over use of AI in peer review

Views becoming more entrenched on both sides, with roughly equal numbers positive and negative about new technologies, poll finds

Published on
September 15, 2025
Last updated
September 15, 2025

Researchers appear to be becoming more divided over whether generative artificial intelligence should be used in peer review, with a survey showing entrenched views on either side.

A survey by IOP found that there has been a big increase in the number of scholars who are positive about the potential impact of new technologies on the process, which is often criticised for being slow and overly burdensome for those involved.

A total of 41 per cent of respondents now see the benefits of AI, up from 12 per cent in a similar survey carried out last year. But this is almost equal to the proportion with negative opinions, which stands at 37 per cent after a two percentage point year-on-year increase.

This leaves only 22 per cent of researchers neutral or unsure about the issue, down from 36 per cent, which IOP said indicates a “growing polarisation in views” as AI use becomes more commonplace.

Women tended to have more negative views about the impact of AI than men, while junior researchers tended to be more positive than their more senior colleagues.

Nearly a third (32 per cent) of those surveyed say they have already used AI tools to support them with peer review in some form.

Half of these say they apply it in more than one way, with the most common use being to assist with editing grammar and improving the flow of text.

A minority used it in more questionable ways, such as the 13 per cent who asked the AI to summarise an article they were reviewing – despite confidentiality and data privacy concerns – and the 2 per cent who admitted to uploading an entire manuscript into a chatbot so it could generate a review on their behalf.

IOP – which currently does not allow AI use in peer reviews – said the survey showed a growing recognition that the technology has the potential to “support, rather than replace, the peer review process”.

But publishers must find ways to “reconcile” the two opposing viewpoints, the publisher added.

A solution could be to develop tools that operate within peer review software, it said, which could support reviewers without posing security or integrity risks.

Publishers should also be more explicit and transparent about why chatbots “are not suitable tools for fully authoring peer review reports”, IOP said.

“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review,” Laura Feetham-Walker, reviewer engagement manager at IOP and lead author of the study, said.

tom.williams@timeshighereducation.com


Reader's comments (2)

In the end, a sane position is for publishers to be transparent about their use of AI; then those who are happy to have it used can opt in, while those of us who are not can take our manuscripts elsewhere.
That may be a solution for authors, but what about those who read the articles which have been waved through by AI peer review? Will it be flagged clearly at the top of the published version? Will journals which do their selection by peer review be penalised in journal ratings? Then it gets murkier. Will AI peer review be able to spot articles written by AI? Will it admit it? And if the journal is itself using AI for peer review, why would it even want to flag or reject articles written by AI?