Unite.AI · February 21
Perplexity AI “Uncensors” DeepSeek R1: Who Decides AI’s Boundaries?

Perplexity AI has modified the Chinese-developed DeepSeek R1 to remove its built-in Chinese censorship, prompting broader discussion of AI censorship issues.

Perplexity AI lifted DeepSeek R1's censorship through a post-training process, enabling it to give detailed answers on sensitive topics.

The decensored R1 1776 preserves the original model's performance while improving openness and truthfulness.

Open-sourcing makes the changes to the model transparent and has drawn a positive community response, but it also raises ethical and geopolitical considerations.

In a move that has caught the attention of many, Perplexity AI has released a new version of a popular open-source language model that strips away built-in Chinese censorship. This modified model, dubbed R1 1776 (a name evoking the spirit of independence), is based on the Chinese-developed DeepSeek R1. The original DeepSeek R1 made waves for its strong reasoning capabilities – reportedly rivaling top-tier models at a fraction of the cost – but it came with a significant limitation: it refused to address certain sensitive topics.

Why does this matter?

It raises crucial questions about AI censorship, bias, openness, and the role of geopolitics in AI systems. This article explores what exactly Perplexity did, the implications of uncensoring the model, and how it fits into the larger conversation about AI transparency and censorship.

What Happened: DeepSeek R1 Goes Uncensored

DeepSeek R1 is an open-weight large language model that originated in China and gained attention for its excellent reasoning abilities – even approaching the performance of leading models – all while being more computationally efficient. However, users quickly noticed a quirk: whenever queries touched on topics sensitive in China (for example, political controversies or historical events deemed taboo by authorities), DeepSeek R1 would not answer directly. Instead, it responded with canned, state-approved statements or outright refusals, reflecting Chinese government censorship rules. This built-in bias limited the model's usefulness for anyone seeking frank or nuanced discussion of those topics.

Perplexity AI's solution was to "decensor" the model through an extensive post-training process. The company gathered a large dataset of 40,000 multilingual prompts covering questions that DeepSeek R1 previously censored or answered evasively. With the help of human experts, it identified roughly 300 sensitive topics on which the original model tended to toe the party line. For each such prompt, the team curated factual, well-reasoned answers in multiple languages. These efforts fed into a multilingual censorship detection and correction system, essentially teaching the model to recognize when it was applying political censorship and to respond with an informative answer instead. After this special fine-tuning, the model – which Perplexity nicknamed "R1 1776" to highlight the freedom theme – was made openly available. Perplexity claims to have eliminated the Chinese censorship filters and biases from DeepSeek R1's responses without otherwise changing its core capabilities.
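To make that pipeline concrete, here is a minimal, hypothetical Python sketch of the data-curation step described above – detecting prompts the base model censors and pairing them with expert-written answers for supervised fine-tuning. The refusal markers, helper names (looks_censored, build_dataset), and data shapes are illustrative assumptions; Perplexity has not published its actual code.

```python
# Hypothetical sketch of the kind of post-training data pipeline described
# above. None of the markers, names, or shapes come from Perplexity's code.
from dataclasses import dataclass

# Crude stand-in patterns for detecting a censored/evasive answer.
REFUSAL_MARKERS = [
    "cannot answer",
    "as an ai developed in",
    "let's talk about something else",
]

@dataclass
class TrainingExample:
    prompt: str    # a query the base model previously censored
    response: str  # a curated, factual answer written by human experts
    language: str  # the prompt set was gathered multilingually

def looks_censored(model_output: str) -> bool:
    """Heuristic stand-in for the multilingual censorship detector."""
    text = model_output.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def build_dataset(prompts, base_model_answer, expert_answer, language="en"):
    """Keep only prompts where the base model censors itself, and pair each
    with a curated answer, yielding (prompt, response) pairs for fine-tuning."""
    dataset = []
    for p in prompts:
        if looks_censored(base_model_answer(p)):
            dataset.append(TrainingExample(p, expert_answer(p), language))
    return dataset
```

In a real pipeline the detector would itself be a trained multilingual classifier rather than a keyword list, and the resulting pairs would feed a standard supervised fine-tuning run over the open weights.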

Crucially, R1 1776 behaves very differently on formerly taboo questions. Perplexity gave an example involving a query about Taiwan's independence and its potential impact on NVIDIA's stock price – a politically sensitive topic that touches on China–Taiwan relations. The original DeepSeek R1 avoided the question, replying with CCP-aligned platitudes. In contrast, R1 1776 delivered a detailed, candid assessment, discussing concrete geopolitical and economic risks (supply chain disruptions, market volatility, possible conflict, and so on) that could affect NVIDIA's stock.

By open-sourcing R1 1776, Perplexity has also made the model’s weights and changes transparent to the community. Developers and researchers can download it from Hugging Face and even integrate it via API, ensuring that the removal of censorship can be scrutinized and built upon by others.
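As a sketch of what that looks like in practice, the snippet below loads the weights with Hugging Face transformers – assuming the repo id perplexity-ai/r1-1776, which should be checked against the actual model card – and sends it the kind of formerly censored query discussed above. The full model is far too large for a single consumer GPU, so in practice one would use a hosted endpoint or a smaller distilled variant, but the mechanics are the same.

```python
# Hedged sketch: assumes the Hugging Face repo id "perplexity-ai/r1-1776";
# verify against the model card. Running the full model requires a large
# multi-GPU host; this only illustrates the mechanics of open-weight access.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "perplexity-ai/r1-1776"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",       # shard across available GPUs (needs `accelerate`)
    torch_dtype="auto",
    trust_remote_code=True,  # DeepSeek-family models may ship custom code
)

prompt = "How would Taiwanese independence affect NVIDIA's stock price?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```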

[Image omitted. Source: Perplexity AI]

Implications of Removing the Censorship

Perplexity AI's decision to remove the Chinese censorship from DeepSeek R1 carries several important implications for the AI community.

The removal of censorship is largely being celebrated as a step toward more transparent and globally useful AI models, but it also serves as a reminder that what an AI should say is a sensitive question without universal agreement.

[Image omitted. Source: Perplexity AI]

The Bigger Picture: AI Censorship and Open-Source Transparency

Perplexity’s R1 1776 launch comes at a time when the AI community is grappling with questions about how models should handle controversial content. Censorship in AI models can come from many places. In China, tech companies are required to build in strict filters and even hard-coded responses for politically sensitive topics. DeepSeek R1 is a prime example of this – it was an open-source model, yet it clearly carried the imprint of China’s censorship norms in its training and fine-tuning. By contrast, many Western-developed models, like OpenAI’s GPT-4 or Meta’s LLaMA, aren’t beholden to CCP guidelines, but they still have moderation layers (for things like hate speech, violence, or disinformation) that some users call “censorship.” The line between reasonable moderation and unwanted censorship can be blurry and often depends on cultural or political perspective.

What Perplexity AI did with DeepSeek R1 raises the idea that open-source models can be adapted to different value systems or regulatory environments. In theory, one could create multiple versions of a model: one that complies with Chinese regulations (for use in China), and another that is fully open (for use elsewhere). R1 1776 is essentially the latter case – an uncensored fork meant for a global audience that prefers unfiltered answers. This kind of forking is only possible because DeepSeek R1’s weights were openly available. It highlights the benefit of open-source in AI: transparency. Anyone can take the model and tweak it, whether to add safeguards or, as in this case, to remove imposed restrictions. Open sourcing the model’s training data, code, or weights also means the community can audit how the model was modified. (Perplexity hasn’t fully disclosed all the data sources it used for de-censoring, but by releasing the model itself they’ve enabled others to observe its behavior and even retrain it if needed.)

This event also nods to the broader geopolitical dynamics of AI development. We are seeing a form of dialogue (or confrontation) between different governance models for AI. A Chinese-developed model with certain baked-in worldviews is taken by a U.S.-based team and altered to reflect a more open information ethos. It’s a testament to how global and borderless AI technology is: researchers anywhere can build on each other’s work, but they are not obligated to carry over the original constraints. Over time, we might see more instances of this – where models are “translated” or adjusted between different cultural contexts. It raises the question of whether AI can ever be truly universal, or whether we will end up with region-specific versions that adhere to local norms. Transparency and openness provide one path to navigate this: if all sides can inspect the models, at least the conversation about bias and censorship is out in the open rather than hidden behind corporate or government secrecy.

Finally, Perplexity’s move underscores a key point in the debate about AI control: who gets to decide what an AI can or cannot say? In open-source projects, that power becomes decentralized. The community – or individual developers – can decide to implement stricter filters or to relax them. In the case of R1 1776, Perplexity decided that the benefits of an uncensored model outweighed the risks, and they had the freedom to make that call and share the result publicly. It’s a bold example of the kind of experimentation that open AI development enables.

