DailyAI | Exploring the World of Artificial Intelligence | March 27, 05:31
AI stirs up trouble in the science peer review process

This article examines the controversy that artificial intelligence (AI) has stirred up in the peer review process. As AI-generated content becomes commonplace, academia faces the twin risks of AI-written review reports and AI-generated paper content slipping through review. Evidence suggests that a portion of reviews may be AI-generated, and absurd AI-created material has appeared in published papers. Publishers are responding with measures such as banning or restricting AI use, yet AI's appeal persists. The article considers AI's potential impact on peer review and academia's commitment to preserving the traditional review system.

🤔 AI-generated content poses a dual threat to peer review: reviewers using AI to write their reports, and AI-generated paper content passing through review. After receiving a review generated by ChatGPT, Timothée Poisot expressed outrage, arguing that it breaks the social contract of peer review.

📊 Evidence shows that AI use in peer review is already emerging. A study published in Nature found that up to 17% of reviews for AI conference papers in 2023-24 showed signs of substantial modification by large language models. A separate Nature survey found that nearly one in five researchers admitted to using AI to speed up and ease the peer review process.

🖼️ AI-generated content can seriously undermine paper quality. A 2024 paper published in a Frontiers journal contained figures produced by the AI art tool Midjourney, featuring deformed images and gibberish text. That these obviously flawed figures passed peer review sparked widespread controversy.

⚖️ Publishers are taking different approaches to AI in peer review. Elsevier bans generative AI in peer review outright; Wiley and Springer Nature allow "limited use" with disclosure; the American Institute of Physics is piloting AI tools to supplement, rather than replace, human feedback.

👍 Despite the controversy, some researchers view AI-generated reviews positively. A Stanford study found that 40% of scientists considered ChatGPT's reviews of their work as helpful as human ones, and 20% found them even more helpful.

Scientific publishing is confronting an increasingly provocative issue: what do you do about AI in peer review?

Ecologist Timothée Poisot recently received a review that was clearly generated by ChatGPT. The document had the following telltale string of words attached: “Here is a revised version of your review with improved clarity and structure.” 

Poisot was incensed. “I submit a manuscript for review in the hope of getting comments from my peers,” he fumed in a blog post. “If this assumption is not met, the entire social contract of peer review is gone.”

Poisot’s experience is not an isolated incident. A recent study published in Nature found that up to 17% of reviews for AI conference papers in 2023-24 showed signs of substantial modification by language models.

And in a separate Nature survey, nearly one in five researchers admitted to using AI to speed up and ease the peer review process.

We’ve also seen a few absurd cases of what happens when AI-generated content slips through the peer review process, which is designed to uphold the quality of research. 

In 2024, a paper published in a Frontiers journal, exploring some highly complex cell signaling pathways, was found to contain bizarre, nonsensical diagrams generated by the AI art tool Midjourney.

One image depicted a deformed rat, while others were just random swirls and squiggles, filled with gibberish text.

This nonsensical AI-generated diagram appeared in a peer-reviewed Frontiers paper in 2024. Source: Frontiers.

Commenters on Twitter were aghast that such obviously flawed figures made it through peer review. “Erm, how did Figure 1 get past a peer reviewer?!” one asked. 

In essence, there are two risks: a) peer reviewers using AI to review content, and b) AI-generated content slipping through the entire peer review process. 

Publishers are responding to these issues. Elsevier has banned generative AI in peer review outright. Wiley and Springer Nature allow “limited use” with disclosure. A few, like the American Institute of Physics, are gingerly piloting AI tools to supplement – but not supplant – human feedback.

However, gen AI’s allure is strong, and some see the benefits if applied judiciously. A Stanford study found that 40% of scientists felt ChatGPT reviews of their work were as helpful as human ones, and 20% found them even more helpful.

Researchers often have positive reactions to AI-generated peer reviews. Source: Nature

Academia has revolved around human input for millennia, though, so the resistance is strong. “Not combating automated reviews means we have given up,” Poisot wrote.

The whole point of peer review, many argue, is considered feedback from fellow experts – not an algorithmic rubber stamp.


