Mashable · June 2, 01:06
Meta reportedly replacing human risk assessors with AI

According to internal documents reviewed by NPR, Meta is planning to replace human risk assessors with artificial intelligence as it accelerates toward full automation. Historically, Meta has relied on human analysts to evaluate the potential risks of new technologies on its platforms, including algorithm updates and safety features. Meta now plans to hand roughly 90 percent of this work over to AI. Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now applying it to decisions on AI safety, youth risk, and content integrity, including misinformation and violent-content moderation. Insiders worry that while automation may speed up app updates, it could also pose greater risks to billions of users, including potential threats to data privacy.

⚠️ Meta plans to replace human risk assessors with AI: Meta is moving toward full automation, planning to use artificial intelligence in place of human risk assessors to evaluate the potential risks posed by new technologies on its platforms.

🤖 AI will be applied to key decision areas: Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now applying it to decisions on AI safety, youth risk, and content integrity, including misinformation and violent-content moderation.

⚡ Automation may speed up app updates: Automation could accelerate app updates and developer releases in line with Meta's efficiency goals. After product teams submit a questionnaire, they will receive instant risk decisions and recommendations, with engineers taking on greater decision-making power.

🛡️ Insiders worry about user risk: Insiders are concerned that automation could pose greater risks to billions of users, including potential threats to data privacy. Meta's Oversight Board has stressed that Meta must identify and address the adverse human-rights impacts these changes may cause.

🔍 Meta previously shut down its human fact-checking program: Meta earlier shut down its human fact-checking program in favor of crowd-sourced Community Notes and now relies more heavily on its content-moderation algorithm, which is known to miss or incorrectly flag misinformation and posts that violate the company's content policies.

According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI, as the company edges closer to complete automation.

Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to the algorithm and safety features, part of a process known as privacy and integrity reviews.

But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence.

Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now rolling out use of the tech in decisions on AI safety, youth risk, and integrity, which includes misinformation and violent content moderation, reported NPR. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making powers.
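NPR's description of the new workflow is high-level, but conceptually it resembles an automated triage pipeline: a structured questionnaire goes in, an instant risk decision with recommendations comes out, and only flagged cases are routed to a human. The sketch below is purely illustrative, using hypothetical field names, risk areas, and thresholds; it is not based on Meta's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sensitive areas reflecting the categories NPR reports are now
# handled by automated review; names and rules here are illustrative only.
SENSITIVE_AREAS = {"ai_safety", "youth_risk", "integrity"}

@dataclass
class Questionnaire:
    feature_name: str
    touches_user_data: bool
    affected_areas: set[str] = field(default_factory=set)

@dataclass
class RiskDecision:
    level: str                  # "low", "elevated", or "escalate"
    recommendations: list[str]

def assess(q: Questionnaire) -> RiskDecision:
    """Return an instant, rule-based risk decision for a submitted questionnaire."""
    recommendations: list[str] = []
    sensitive_hits = q.affected_areas & SENSITIVE_AREAS

    if q.touches_user_data:
        recommendations.append("Document data retention and consent flows.")
    if sensitive_hits:
        recommendations.append(f"Flagged areas: {', '.join(sorted(sensitive_hits))}.")

    if sensitive_hits and q.touches_user_data:
        # Under the previous process a human analyst would own this call;
        # here the automated path escalates it for optional human review.
        return RiskDecision("escalate", recommendations + ["Route to human reviewer."])
    if sensitive_hits or q.touches_user_data:
        return RiskDecision("elevated", recommendations)
    return RiskDecision("low", ["Proceed with standard launch checklist."])

if __name__ == "__main__":
    decision = assess(Questionnaire(
        feature_name="teen_feed_ranking_update",
        touches_user_data=True,
        affected_areas={"youth_risk", "integrity"},
    ))
    print(decision.level, decision.recommendations)
```

The insiders' concern maps directly onto a design like this: once the decision step is automatic, launch speed depends only on how teams fill in the questionnaire, and anything the rules (or a model replacing them) fail to flag never reaches a human reviewer.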

While the automation may speed up app updates and developer releases in line with Meta's efficiency goals, insiders say it may also pose a greater risk to billions of users, including unnecessary threats to data privacy.

In April, Meta's oversight board published a series of decisions that simultaneously validated the company's stance on allowing "controversial" speech and rebuked the tech giant for its content moderation policies.

"As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them," the decision reads. "This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts."

Earlier that month, Meta shuttered its human fact-checking program, replacing it with crowd-sourced Community Notes and relying more heavily on its content-moderating algorithm — internal tech that is known to miss and incorrectly flag misinformation and other posts that violate the company's recently overhauled content policies.

