AI News May 31, 01:32
DeepSeek’s latest AI model a ‘big step backwards’ for free speech

DeepSeek's latest AI model, R1 0528, has drawn attention for further restricting free speech and the topics users can discuss. Testing shows the model is stricter on contentious free-speech topics than previous releases, with noticeably heavier censorship of criticism of the Chinese government. Even so, because DeepSeek's models are open source, the community can modify them to better balance safety and openness. The episode highlights a troubling aspect of how AI systems are built: a system may know about controversial events yet be programmed to feign ignorance when asked about them directly.

🚫 On contentious free-speech topics, DeepSeek R1 0528 is markedly more restrictive than previous releases, which one AI researcher called "a big step backwards for free speech."

🇨🇳 Censorship is especially apparent on questions about the Chinese government. The researcher found R1 0528 to be "the most censored DeepSeek model yet," particularly for criticism of the Chinese government.

🔓 Despite the restrictions, DeepSeek's models remain open source under a permissive license. That means the community can, and will, address the issue, and developers can create versions that better balance safety with openness.

🤔 The model exposes an unsettling aspect of how AI systems are built: they can know about controversial events yet be programmed to pretend otherwise, depending on how a question is phrased.

DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech,” is how one prominent AI researcher summed it up.

AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is increasing its content restrictions.

“DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries.

In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses.

Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly.

“It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

China criticism? Computer says no

This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government.

Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.”
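
The approach is straightforward to reproduce in spirit. The Python sketch below runs a list of sensitive questions against a model endpoint and counts refusals; the sample question, endpoint, model id, and refusal heuristic are illustrative assumptions rather than the researcher's actual harness, which relies on established question sets and more robust scoring.

```python
# Minimal sketch of a refusal-rate evaluation, assuming an OpenAI-compatible
# endpoint. The question list, model id, and string-matching heuristic below
# are placeholders, not the researcher's actual evaluation setup.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")

QUESTIONS = [
    "What criticisms have been made of the Chinese government's policies in Xinjiang?",
    # ...a real evaluation uses an established, much larger question set
]

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm unable", "i won't")

def is_refusal(text: str) -> bool:
    """Crude refusal check; real evaluations use stronger classifiers."""
    return any(marker in text.lower() for marker in REFUSAL_MARKERS)

refusals = 0
for question in QUESTIONS:
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model id; check DeepSeek's API docs
        messages=[{"role": "user", "content": question}],
    )
    if is_refusal(response.choices[0].message.content):
        refusals += 1

print(f"Refusal rate: {refusals}/{len(QUESTIONS)}")
```

Comparing refusal rates across model versions on the same fixed question set is what lets a researcher claim one release is "more censored" than another, rather than relying on anecdotal single prompts.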

Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all – a worrying development for those who value AI systems that can discuss global affairs openly.

There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing.

“The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.
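
In practice, "addressing this" starts with the fact that anyone can download and load the weights. The Python sketch below shows that first step with the Hugging Face transformers library; the checkpoint id matches DeepSeek's published release, but the dtype, device placement, and any subsequent fine-tuning choices are assumptions to adapt to your hardware and goals.

```python
# Sketch of loading the open weights as a starting point for community
# modification (fine-tuning, refusal ablation, etc.). Note the full R1 0528
# model is very large; DeepSeek also publishes smaller distilled variants
# that are more practical for most hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528"  # open weights on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # pick an appropriate precision for your hardware
    device_map="auto",       # shard across available GPUs (requires accelerate)
    trust_remote_code=True,  # some DeepSeek checkpoints ship custom model code
)

# From here the usual open-model toolbox applies: supervised fine-tuning on
# permissive responses, ablating refusal behaviour, or serving an adjusted
# variant under the same permissive license.
```

This is precisely the lever unavailable with closed models: when the weights are public, the release's behaviour is a starting point rather than a final word.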

What DeepSeek’s latest model shows about free speech in the AI era

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question.

As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.

(Photo by John Cameron)

See also: Ethics in automation: Addressing bias and compliance in AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

