Mashable, 21 hours ago
AI chatbots often distort nations human rights records, study finds

A study from MIT finds that popular LLMs (large language models) show systematic bias when assessing press freedom, which may distort users' understanding of press freedom around the world. The research shows that these models tend to underrate press freedom and judge countries with higher press freedom more negatively. The LLMs also exhibit an "in-group bias," giving more favorable assessments of press freedom in their developers' home countries. These biases stem from the training data; they risk downplaying press restrictions in authoritarian countries and give wealthy nations and developers a means to project soft power in the global "AI race." The researchers stress that ensuring AI models accurately represent democratic institutions is essential to preserving democratic societies in the digital age.

📢 LLMs show systematic bias in how they assess press freedom, tending to underrate it. ChatGPT, for example, rated press freedom negatively for 97 percent of the 180 countries tested.

🌍 LLMs rate countries with higher press freedom disproportionately lower, while showing an "in-group bias" that favors their developers' home countries.

📚 These biases stem from the training data: in countries with greater press freedom, negative coverage circulates more widely, skewing the data the models learn from.

⚠️ LLMs risk downplaying press restrictions in authoritarian countries and give wealthy nations a means to project soft power in the global "AI race."

💡 The researchers stress that ensuring AI models accurately represent democratic institutions is not just a technical challenge but a fundamental requirement for preserving democratic societies in the digital age.

LLMs — the large language models powering your favorite AI chatbots — don't just carry social and racial biases, a new report finds, but also inherent biases against democratic institutions.

A recent study, published by researchers at the MIT Sloan School of Management, analyzed how six popular LLMs (including ChatGPT, Gemini, and DeepSeek) portray the state of press freedom — and, indirectly, trust in the media — in responses to user prompts. The results showed that the LLMs consistently suggested countries have less press freedom than official reports indicate, such as the non-governmental World Press Freedom Index (WPFI) published by Reporters Without Borders. ChatGPT, for example, rated 97 percent of the 180 countries used in the test negatively.
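
The article doesn't reproduce the study's methodology in detail, but the comparison it describes can be pictured with a minimal sketch like the one below. This is an illustration only, not the paper's code: query_model is a hypothetical stand-in for a real chat-completion call, and the prompt wording, the 0-100 scale, and the sample index scores are all assumptions.

```python
# Minimal sketch: ask a model for per-country press-freedom scores and compare
# them against an official index. All names and numbers here are illustrative.

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call (e.g. a chat-completions request)."""
    raise NotImplementedError("wire this up to an actual model")

# Illustrative (not actual) index scores on a 0-100 scale; higher = freer.
wpfi_scores = {"Norway": 92.0, "Germany": 83.0, "Brazil": 65.0, "Egypt": 25.0}

def llm_press_freedom_score(country: str) -> float:
    prompt = (
        f"On a scale from 0 (no press freedom) to 100 (full press freedom), "
        f"reply with a single number for the current state of press freedom in {country}."
    )
    return float(query_model(prompt).strip())

def signed_errors(reference: dict[str, float]) -> dict[str, float]:
    # Negative values mean the model rates a country below the official index,
    # the pattern the study reports for most countries.
    return {c: llm_press_freedom_score(c) - score for c, score in reference.items()}

if __name__ == "__main__":
    errors = signed_errors(wpfi_scores)
    share_underrated = sum(e < 0 for e in errors.values()) / len(errors)
    print(f"Share of countries rated below the index: {share_underrated:.0%}")
```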

Chatbots have become an essential piece of infrastructure in the information environment — reaching levels of influence on par with social media — and they may also shape how their users understand freedom worldwide. The MIT study found the LLMs distorting and understating press freedom in nations that actually place relatively few restrictions on journalists.

"LLMs have millions of users around the world," Isabella Loaiza, postdoctoral researcher at MIT Sloan and one of the report's authors, told Mashable. "So their misrepresentations of the press, its integrity, and independence can distort the public's perception of civic rights and the freedoms enjoyed by reporters within and across countries."

LLMs may skew perceptions of their home country

In addition to generally skewed rankings of press freedom globally, the study found consistent patterns in which countries received a more critical evaluation: all six chat agents rated countries with historically high levels of press freedom, or "freer" countries, disproportionately lower than other countries. At the same time, the LLMs displayed a natural "in-group bias" that resulted in more positive assessments of their home, or developer, countries.

Overall, the tests showed systematic, not random, biases in how LLMs assess press freedom. The reasons for this lie, as with all model bias, in their training data. "We refer to what some authors have called the democratic dilemma, whereby in societies with greater press freedom, journalists and citizens can freely critique government policies and document press freedom violations, generating substantial negative coverage that becomes overrepresented in the training data," the report explains. Because more "negative" news circulates in "freer" countries, while authoritarian or restricted media systems suppress critical reporting, the training data, and in turn the chatbots' responses, end up skewed.
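
One rough way to picture the "more critical of freer nations" pattern, again as a sketch rather than the study's actual analysis: bucket countries by their official index score and compare the average gap between model score and index in each bucket. The tier cutoff and the toy numbers below are assumptions.

```python
# Sketch: check whether a model is harsher on "freer" countries by comparing
# mean signed error (model score minus index score) across freedom tiers.
# The tier cutoff and input format are illustrative assumptions.
from statistics import mean

def tier(index_score: float) -> str:
    """Bucket a country by its official index score (higher = freer)."""
    return "freer" if index_score >= 70 else "less free"

def mean_error_by_tier(rows: list[tuple[str, float, float]]) -> dict[str, float]:
    """rows holds (country, index_score, model_score); negative error = model underrates."""
    buckets: dict[str, list[float]] = {"freer": [], "less free": []}
    for _country, index_score, model_score in rows:
        buckets[tier(index_score)].append(model_score - index_score)
    return {name: mean(errors) for name, errors in buckets.items() if errors}

# Toy numbers chosen only to mimic the reported pattern, not taken from the study.
sample = [
    ("Norway", 92.0, 70.0), ("Germany", 83.0, 68.0),
    ("Brazil", 65.0, 60.0), ("Egypt", 25.0, 24.0),
]
print(mean_error_by_tier(sample))  # a more negative "freer" value means harsher ratings there
```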

"While we find that the LLMs misrepresent press freedom similarly in two ways — both by underrating press freedom and by being more critical of freer nations — our third finding show that different models are misaligned in different directions," Loaiza explained. "This result is interesting because having these differences in the way that LLMs are misaligned makes it more difficult to spot these biases, and we also haven't seen other research that shows such diverging patterns. Additionally, this finding (the home bias) suggests that a particularly popular or widely used model could influence global narratives about its home country or countries similar to it, by being less critical about the state of its press freedom."

Studies have shown that users who interact with daily news mainly through social media have less confidence in the press, the report explains, but LLMs pose an additional set of problems. They are, on the one hand, more similar to traditional print and broadcast news sources with centralized information streams. But AI chatbots are also influenced by algorithmic content mediation, just like social media feeds, creating similar echo chambers in addition to problems unique to chatbots, such as sycophantic behavior.

According to researchers, these biases pose the risk of downplaying press restrictions in more authoritarian countries in responses, while providing wealthier nations and developers with a means to "project soft power" in the global "AI race."

Chatbots could be problematic tools for human rights

Even chatbot enthusiasts have noted the unreliability of the most popular models and the ease with which they can fall into nationalist responses. Early DeepSeek users, for example, found that the chat agent censored prompts about Chinese politics and history, a limitation built into the model at the behest of Chinese state officials, whose rules bar generative AI that violates the country's "core socialist values" or "incites to subvert state power and overthrow the socialist system."

The report's authors connect these findings to the increasing use of AI in official evaluations, including its integration within high-level international bodies like the United Nations, calling into question the models' reliability as tools for global human and civic rights assessments.

"As these systems become key cultural and geopolitical tools, they must ensure accurate representations of democratic institutions, like the press and human and civic rights," the report authors write. "Their alignment with democratic principles becomes not just a technical challenge but a fundamental requirement for preserving democratic societies in the digital age."

"Access to reliable information on the state and health of the institutions that uphold democracy is critical for civic participation," said report co-author Roberto Rigobon.
