Communications of the ACM - Artificial Intelligence
Evaluating Alternative Ideas Might Get Us Away From Siloed Positions

This article explores how artificial intelligence (AI) can help people judge whether information is true, reducing bias and one-sided thinking. It notes that when reading and writing, people are easily swayed by their preconceptions and can be too quick to believe others. By applying AI, for example to flag controversial or trending content in an article and present multiple viewpoints, readers can be helped toward a fuller understanding of the information and toward more critical thinking. The article also describes a working prototype demonstrating how AI can identify and present controversial content, with the aim of helping people form more objective judgments.

🤔 Many people readily believe information from all kinds of sources, including friends and online content, but rarely examine its truthfulness closely. AI can help us evaluate information more objectively.

💡 AI-generated responses usually cover a broad range of topics, but they are not perfect; combining them with other tools and information sources can improve their accuracy. Research shows that dialogue with AI can durably reduce belief in conspiracy theories, suggesting that in some situations people place more trust in AI's authority.

🧐 AI can assist both writing and reading, helping us understand how factual an idea is and how widely it is accepted. For example, flagging trending or controversial content in an article and supplying related information can help us form a more complete understanding.

🔴 A working prototype shows how AI can flag and present controversial content. When red-highlighted text is selected, a "Highly Controversial Content" popup appears with supporting and opposing points, helping readers understand the information more fully.

Do people believe everything they read, even from a respected blogger? Do we have the energy to read different news sources with more than one political slant? When people ask a trusted friend about something, how reliable are the friend's statements, and how should they determine what is true in what is said? Every Wikipedia article has a viewable history page showing all the changes people have made. Even though it holds a valuable record of how the article reached its current state, often including intrigue and controversy, only serious editors tend to look at it.

Authors ask people for help with ideas and manuscript drafts, but don't accept all of their suggestions. A user's requests to AI are shaped by, and often expose, their preconceptions and how they want to hear about things, and the AI, like a human listener, pays some homage to that perspective. GenAI draws on everything it has seen, responding with broad, organized coverage of a topic. Users get a lot of value from its (imperfect) responses. Just as we have others check our work and ideas, we need to check the AI's, perhaps by asking other tools such as Perplexity to supply references for its responses. Still, we may believe what it says more readily than we believe what people say. The paper "Durably reducing conspiracy beliefs through dialogues with AI" (https://www.science.org/doi/10.1126/science.adq1814) showed that while talking to people about their conspiracy beliefs didn't change their minds, GenAI's criticism was more trusted, reducing conspiracy beliefs by 20%. We may talk of not trusting AI, but in some cases at least we trust its authority more than we trust people.

Like GenAI's, our ideas don't usually come from nowhere; they are often things we casually heard that may be wrong or controversial. I imagine letting GenAI review text we are writing or reading to help us understand where our ideas stand in terms of factual basis or general acceptance. How timely our ideas are, and how well they fit with what others are saying, matters too.

Knowing that something is trending, or knowing the alternative ideas people are considering, might be as important as factualness. While writing or reading, a simple Web search could tell whether something is trending; the user interface could color such passages purple in the email being drafted or received. AI could be asked whether a statement has a lot of written material for and/or against it; the interface could color such passages shades of red to signal how controversial they are: pink if a little controversial, red if very controversial. When a purple or red passage is selected, the AI shows collected examples as the basis for the trending or controversial status of the statement. A curious writer or reader can click the colored phrase like a link and learn more about what people are saying about it.
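To make the idea concrete, here is a minimal sketch of how such color coding might work in a browser extension's content script. It is an illustration under assumptions, not the author's implementation: the analysis endpoint, the controversy thresholds, and the helper names (`analyzePhrase`, `colorPhrase`, `showEvidencePopup`) are all hypothetical.

```typescript
// Hypothetical sketch of trending/controversy color coding in a
// content script. The backing analysis service is assumed, not real.

type PhraseAnalysis = {
  trending: boolean;   // e.g., derived from a Web-search volume check
  controversy: number; // 0..1: how much written material argues against it
};

// Placeholder call to an assumed AI/search backend.
async function analyzePhrase(text: string): Promise<PhraseAnalysis> {
  const res = await fetch("https://example.invalid/analyze", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

// Stub: a real extension would render the collected examples here.
function showEvidencePopup(phrase: string): void {
  console.log("Show collected examples for:", phrase);
}

// Wrap a text node in a colored, clickable span, mirroring the
// purple/pink/red scheme described above (thresholds are assumptions).
function colorPhrase(node: Text, a: PhraseAnalysis): void {
  const span = document.createElement("span");
  span.textContent = node.textContent ?? "";
  if (a.controversy > 0.6) {
    span.style.backgroundColor = "#ff8a8a"; // very controversial: red
  } else if (a.controversy > 0.3) {
    span.style.backgroundColor = "#ffd3dc"; // a little controversial: pink
  } else if (a.trending) {
    span.style.backgroundColor = "#dcc8ff"; // trending: purple
  }
  span.style.cursor = "pointer";
  span.addEventListener("click", () => showEvidencePopup(span.textContent ?? ""));
  node.replaceWith(span);
}
```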

So while people and GenAI can build believable fantasy worlds, they might also check facts and call bullshit.

Justin Gregg's book "If Nietzsche Were a Narwhal" speaks to the difference between bullshit and lying. Lying is saying something you know to be false; bullshitting is not caring what the truth is and simply saying what you want to believe. This blog is a call to use AI to help people notice where they may have made things up for convenience, when they would be better off knowing that not everyone agrees with them.

The idea of this blog is to imagine a tool that uses AI to encourage people to see multiple perspectives on what they write and read. Just as conspiracy beliefs were reduced when AI presented alternative perspectives, using AI to surface alternative perspectives on everything we write and read may help us reduce our siloed and biased thinking.

Here is an example from an actual prototype: 

As shown, my Chrome extension has highlighted in red the text "With the Union's victory, slavery was abolished nationally." When the red text is selected, the "Highly Controversial Content" popup is displayed, with "Supporting Points" and "Opposing Points."
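A plausible data shape behind such a popup is sketched below; the interface and field names are illustrative guesses, not taken from the actual extension.

```typescript
// Hypothetical shape of the "Highly Controversial Content" card;
// names are illustrative, not from the real prototype.
interface ControversyCard {
  claim: string; // the red-highlighted sentence
  supportingPoints: string[];
  opposingPoints: string[];
}

// Render the card as a simple popup element.
function renderCard(card: ControversyCard): HTMLElement {
  const div = document.createElement("div");
  const list = (items: string[]) =>
    `<ul>${items.map((p) => `<li>${p}</li>`).join("")}</ul>`;
  div.innerHTML = `
    <h3>Highly Controversial Content</h3>
    <p>${card.claim}</p>
    <h4>Supporting Points</h4>${list(card.supportingPoints)}
    <h4>Opposing Points</h4>${list(card.opposingPoints)}`;
  return div;
}
```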

Ted Selker is a computer scientist and student of interfaces.
