MIT Technology Review » Artificial Intelligence
AI can do a better job of persuading people than we do

A new study suggests that large language models (LLMs) such as GPT-4 may be more effective than humans at persuading others. The researchers found that GPT-4's persuasiveness increased significantly when it could tailor its arguments to personal information about its debate opponent. The finding raises concerns about AI's role in spreading information and shaping public opinion, while also pointing to its potential for countering disinformation. The researchers stress the need for deeper study of the psychology of human-AI interaction and for strategies to respond to AI-driven disinformation.

🗣️ The study shows that GPT-4 can be more persuasive than humans in debates: when the AI had access to its opponent's personal information, it was 64% more persuasive than humans without that information.

🤔 Participants who believed they were debating an AI were more likely to change their views, highlighting the need for psychological research into how humans interact with AI.

⚠️ The researchers warn that AI could be used to orchestrate coordinated disinformation campaigns that threaten public opinion, and that response strategies need to be developed.

💡 The study also notes that LLMs could be used to generate personalized counter-narratives to combat disinformation, though effective mitigation strategies require further research.

🧐 The study stresses that we still know little about how people interact with AI models and that deeper research is needed into their psychological responses when debating an AI.

Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models (LLMs) might do a better job. The finding implies that AI could become a powerful tool for persuading people, for better or worse.

A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating.

Their findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion. The authors warn that the findings show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. The research has been published in the journal Nature Human Behaviour.

“Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction,” says Riccardo Gallotti, an interdisciplinary physicist at Fondazione Bruno Kessler in Italy, who worked on the project.

“These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time,” he says.

The researchers recruited 900 people based in the US and got them to provide personal information like their gender, age, ethnicity, education level, employment status, and political affiliation. 

Participants were then matched with either another human opponent or GPT-4 and instructed to debate one of 30 randomly assigned topics—such as whether the US should ban fossil fuels, or whether students should have to wear school uniforms—for 10 minutes. Each participant was told to argue either in favor of or against the topic, and in some cases they were provided with personal information about their opponent, so they could better tailor their argument. At the end, participants said how much they agreed with the proposition and whether they thought they were arguing with a human or an AI.

Overall, the researchers found that GPT-4 either equaled or exceeded humans’ persuasive abilities on every topic. When it had information about its opponents, the AI was deemed to be 64% more persuasive than humans without access to the personalized data—meaning that GPT-4 was able to leverage the personal data about its opponent much more effectively than its human counterparts. When humans had access to the personal information, they were found to be slightly less persuasive than humans without the same access.

The authors noticed that when participants thought they were debating against AI, they were more likely to agree with it. The reasons behind this aren’t clear, the researchers say, highlighting the need for further research into how humans react to AI.

“We are not yet in a position to determine whether the observed change in agreement is driven by participants’ beliefs about their opponent being a bot (since I believe it is a bot, I am not losing to anyone if I change ideas here), or whether those beliefs are themselves a consequence of the opinion change (since I lost, it should be against a bot),” says Gallotti. “This causal direction is an interesting open question to explore.”

Although the experiment doesn’t reflect how humans debate online, the research suggests that LLMs could also prove an effective way to not only disseminate but also counter mass disinformation campaigns, Gallotti says. For example, they could generate personalized counter-narratives to educate people who may be vulnerable to deception in online conversations. “However, more research is urgently needed to explore effective strategies for mitigating these threats,” he says.

While we know a lot about how humans react to each other, we know very little about the psychology behind how people interact with AI models, says Alexis Palmer, a fellow at Dartmouth College who has studied how LLMs can argue about politics but did not work on the research. 

“In the context of having a conversation with someone about something you disagree on, is there something innately human that matters to that interaction? Or is it that if an AI can perfectly mimic that speech, you’ll get the exact same outcome?” she says. “I think that is the overall big question of AI.”
