Astral Codex Ten Podcast feed, July 17, 2024
Moderation Is Different From Censorship

This article examines the difference between content moderation and censorship on social media platforms. The author argues that moderation is a normal business activity a platform undertakes to protect the user experience, such as blocking harassment or disinformation, whereas censorship forcibly suppresses information to satisfy those in power, regardless of what users want to see. Using China as an example, the author notes that if the Chinese government could only moderate rather than censor, the world would look completely different: any Chinese person could see information about Xinjiang, Tiananmen Square, the Shanghai lockdowns, or criticism of Xi Jinping just by clicking a button.

😄 **Content moderation is a normal business activity that protects the user experience**: a platform may block harassment or disinformation so that users have a good experience. Moderation should be driven by what users want, not by the demands of those in power.

🤔 **Censorship forcibly blocks information even when users want to see it**: if the sender wants to send a message and the receiver wants to receive it, but a third party bans the exchange, that is censorship. Censorship stifles free speech and obstructs the flow of information.

🤯 **China illustrates the harm of censorship**: if the Chinese government could only moderate content rather than censor it, any Chinese person could see information about Xinjiang, Tiananmen Square, the Shanghai lockdowns, or criticism of Xi Jinping just by clicking a button.

💡 **The difference between moderation and censorship is user choice**: moderation lets users decide whether to see blocked content, while censorship removes it for everyone and takes that choice away.

👍 **Content moderation keeps the information ecosystem from degrading**: in a well-functioning information ecosystem the distinction may seem trivial, but it prevents the worst abuses and places a floor on how bad things can get.

https://astralcodexten.substack.com/p/moderation-is-different-from-censorship

This is a point I keep seeing people miss in the debate about social media.

Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform.

Censorship is the abnormal activity of ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship.

The racket works by pretending these are the same imperative. “Well, lots of people will be unhappy if they see offensive content, so in order to keep the platform safe for those people, we’ve got to remove it for everybody.”

This is not true at all. A minimum viable product for moderation without censorship is for a platform to do exactly the same thing they’re doing now - remove all the same posts, ban all the same accounts - but have an opt-in setting, “see banned posts”. If you personally choose to see harassing and offensive content, you can toggle that setting, and everything bad will reappear. To “ban” an account would mean to prevent the half (or 75%, or 99%) of people who haven’t toggled that setting from seeing it. The people who elected to see banned posts could see them the same as always. Two “banned” accounts could still talk to each other, retweet each other, etc - as could accounts that hadn’t been banned, but had opted into the “see banned posts” setting.
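To make the mechanism concrete, here is a minimal sketch in Python of the opt-in filter described above. The `User`, `Post`, and `see_banned_posts` names are illustrative assumptions rather than anything from the post; the point is only that a "ban" becomes a per-viewer visibility rule instead of a platform-wide deletion.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    see_banned_posts: bool = False  # the opt-in "see banned posts" toggle

@dataclass
class Post:
    author: str
    text: str
    banned: bool = False  # flagged/removed by moderation

def visible_posts(viewer: User, feed: list[Post]) -> list[Post]:
    """Return the posts this viewer should see.

    Moderation without censorship: "banned" posts are hidden only from
    viewers who have not opted in to seeing them, never from everyone.
    """
    if viewer.see_banned_posts:
        return feed  # opted-in viewers see everything, banned or not
    return [p for p in feed if not p.banned]

# Example: the same feed, seen by two different viewers
feed = [
    Post("alice", "ordinary post"),
    Post("troll", "harassing post", banned=True),
]
default_user = User("bob")                        # default: banned posts hidden
opted_in_user = User("carol", see_banned_posts=True)

assert [p.text for p in visible_posts(default_user, feed)] == ["ordinary post"]
assert [p.text for p in visible_posts(opted_in_user, feed)] == ["ordinary post", "harassing post"]
```

The same filter could be keyed on a banned-account flag rather than a per-post flag, so that "banning" an account hides its posts only from viewers who keep the default setting, while banned and opted-in accounts can still see and reply to each other.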

Does this difference seem kind of pointless and trivial? Then imagine applying it to China. If the Chinese government couldn’t censor - only moderate - the world would look completely different. Any Chinese person could get accurate information on Xinjiang, Tiananmen Square, the Shanghai lockdowns, or the top fifty criticisms of Xi Jinping - just by clicking a button on their Weibo profile. Given how much trouble ordinary Chinese people go through to get around censors, probably many of them would click the button, and then they’d have a free information environment. This switch might seem trivial in a well-functioning information ecology, but it prevents the worst abuses, and places a floor on how bad things can get.
