Communications of the ACM - Artificial Intelligence, November 26, 2024
Warnings!

This article, written at the end of October 2024, examines problems raised by artificial intelligence and by social media recommendation algorithms, such as misinformation. It suggests that sensitivity to warnings may be a genetic survival trait, notes that social media platforms use recommendation algorithms to steer users, stresses the importance of training in critical thinking, points to the growing need for information assessment and filtering, and argues that information abuse must be addressed even while privacy is protected.

🎯 Sensitivity to warnings may be a genetic survival trait; humans respond to warnings almost automatically.

📱 Social media platforms use recommendation algorithms to steer users, including pushing information that matches their interests or concerns.

💡 Training in critical thinking is more important than ever for coping with today's mix of accurate and inaccurate information.

🔍 The need for information assessment and filtering is growing; digital signatures and reliable registration of information sources could help.

🛡️ Information abuse must be addressed even while privacy is protected; identity, provenance, and accountability matter.

As I write this at the end of October 2024, artificial intelligence (AI) continues to be Topic A in many discussions. So too are recommendation algorithms in social media. Misinformation and disinformation rank high among many areas of socio-economic concern. We are even seeing misinformation about the Federal response to severe storms interfering with our ability to render aid. Why is it that we are attracted to and respond so readily to alarming information?

I have a rather unscientific theory about this. It isn’t grounded in solid data, but it is a cartoon model of the way I think of the phenomenon. I think sensitivity to warnings is likely a genetic survival trait for all species, especially those with some level of cognition. I include non-human species in that category. Warning calls are common across many species. Humans have benefited from such warnings by surviving to contribute to the gene pool. Many who ignored warnings did not survive and did not contribute. Thus, when we read, see, or hear warnings, we respond almost automatically. “It’s a bear! Run!” (Actually, I hear that running from a bear is bad advice.)

Social media influencers take advantage of recommendation algorithms that steer users toward perceived interests, and of the scale at which these systems operate. The same mechanisms that might select advertisements of interest may also steer users toward information, including warnings that appear to be of interest or concern. None of this is a new realization. My long-time friend and colleague, Peter G. Neumann, drew attention to this in a 2001 Communications article which is as relevant now as it was then, maybe even more so.

This is not the first time I have written about this phenomenon. The mix of accurate, inaccurate, and deliberately misleading information reinforces my belief that training in critical thinking is needed now more than ever. We rely on many more sources of information today than we did in the past, in part because virtually anyone with access to the Internet and World Wide Web is in a position to post his or her views to a global audience. In the past, fewer sources might have meant that information consumers could exercise more due diligence on the sources they chose to rely on. The proliferation of sources increases the need for and utility of provenance of content and concomitant assessment of sources.

This kind of filtering is not new. We don’t read every book, newspaper, or magazine; watch every movie or television show; or listen to every broadcast. We don’t even pay attention to every social media site on the ‘Net. We select these based on recommendations from parties we trust, often including our friends or organizations we belong to.

We could use some technical help, however, as we wrestle to assess the provenance of the information we encounter. Digital signatures and reliable registration of information sources might help. Anonymous speech, while of value in some circumstances (such as whistleblowing), is generally prone to harmful abuse because the source may believe it is immune from the consequences of spreading disinformation. The problem is exacerbated by people who spread information without checking, either deliberately or out of naive belief that it is correct or relevant. Elections in this century have been affected by deliberate misinformation campaigns sourced anonymously or by parties whose identity is deliberately obscured.
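The idea of combining digital signatures with a reliable registry of information sources can be sketched in a few lines. The sketch below is illustrative only: the `SourceRegistry` class and its methods are hypothetical names invented for this example, and it uses Python's standard-library `hmac` and `hashlib` as a simple stand-in for real asymmetric signatures (a production system would register public keys and use something like Ed25519 signatures instead).

```python
import hashlib
import hmac
import secrets


class SourceRegistry:
    """Hypothetical registry mapping source names to signing keys.

    Stands in for a trusted registry of information sources. A real
    system would hold public keys and verify asymmetric signatures;
    here, symmetric HMAC tags play that role for simplicity.
    """

    def __init__(self):
        self._keys = {}  # source name -> secret signing key

    def register(self, source: str) -> bytes:
        """Register a source and issue it a signing key."""
        key = secrets.token_bytes(32)
        self._keys[source] = key
        return key

    def verify(self, source: str, content: bytes, tag: str) -> bool:
        """Check that `content` carries a valid tag from a registered source."""
        key = self._keys.get(source)
        if key is None:
            return False  # unregistered source: no provenance claim possible
        expected = hmac.new(key, content, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)


def sign(key: bytes, content: bytes) -> str:
    """Produce a provenance tag for content using the source's key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()


# A registered source signs its content; readers can then check
# both who published it and whether it was altered in transit.
registry = SourceRegistry()
key = registry.register("example-news-desk")
article = b"Storm warning: seek higher ground."
tag = sign(key, article)

print(registry.verify("example-news-desk", article, tag))           # genuine
print(registry.verify("example-news-desk", b"Altered text.", tag))  # tampered
print(registry.verify("unknown-blog", article, tag))                # unregistered
```

The point of the sketch is the shape of the check, not the cryptography: verification fails both when content is altered and when the claimed source was never registered, which is exactly the provenance signal a reader (or a platform) could use when weighing a warning.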

I have become persuaded that identity, provenance, and accountability are our friends in this proliferated, online space. Of course, I subscribe to the idea that privacy is an important societal value but not at the expense of potential harms arising from the abuse of anonymity. The veil of anonymity may need to be pierced under the right judicial conditions. I am not in favor of so-called “backdoor” processes as they can be abused and have been in the recent past; for example, by hijacking wire-tapping provisions to gain unauthorized access to telephone conversations. I remember well the debate of the so-called “Clipper Chip” in the early 1990s that would have provided “authorized parties” with the ability to decrypt content encrypted by the chip. Eventually, some unauthorized party will find a way to abuse the capability.

Plainly, we computer technologists have work to do to help our societies cope with the potentially harmful effects of media scale while protecting the provisions of the Universal Declaration of Human Rights.
