Researchers appear to be becoming more divided over whether generative artificial intelligence should be used in peer review, with a survey showing entrenched views on either side.
A survey by IOP Publishing found that there has been a big increase in the number of scholars who are positive about the potential impact of new technologies on the process, which is often criticised for being slow and overly burdensome for those involved.
A total of 41 per cent of respondents now see the benefits of AI, up from 12 per cent in a similar survey carried out last year. But this is almost equal to the proportion with negative opinions, which stands at 37 per cent after a 2 percentage point year-on-year increase.
This leaves only 22 per cent of researchers neutral or unsure about the issue, down from 36 per cent, which IOP said indicates a "growing polarisation in views" as AI use becomes more commonplace.
Women tended to have more negative views than men about the impact of AI, while junior researchers tended to be more positive than their senior colleagues.
Nearly a third (32 per cent) of those surveyed said they had already used AI tools in some form to support their peer reviews.
Half of these said they applied it in more than one way, with the most common use being to edit grammar and improve the flow of text.
A minority used it in more questionable ways, such as the 13 per cent who asked AI to summarise an article they were reviewing (despite confidentiality and data privacy concerns) and the 2 per cent who admitted to uploading an entire manuscript into a chatbot so it could generate a review on their behalf.
IOP, which currently does not allow AI use in peer reviews, said the survey showed a growing recognition that the technology has the potential to "support, rather than replace, the peer review process".
But publishers must find ways to "reconcile" the two opposing viewpoints, the publisher added.
One solution could be to develop tools that operate within peer review software, it said, which could support reviewers without posing security or integrity risks.
Publishers should also be more explicit and transparent about why chatbots "are not suitable tools for fully authoring peer review reports", IOP said.
"These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review," said Laura Feetham-Walker, reviewer engagement manager at IOP and lead author of the study.