TechCrunch News · March 6
Anthropic quietly removes Biden-era AI policy commitments from its website

Anthropic has quietly removed from its website several voluntary commitments it made with the Biden Administration in 2023 to promote safe and "trustworthy" AI. The commitments included sharing information on managing AI risks across industry and government and researching AI bias and discrimination. The move has drawn attention from AI watchdog groups. Meanwhile, other Biden-era commitments related to reducing AI-generated image-based sexual abuse remain in place. Anthropic appears to have given no advance notice of the change. Separately, OpenAI has also adjusted its public policies, embracing "intellectual freedom" and removing a page on its website that expressed its commitment to diversity, equity, and inclusion. These changes come as the Trump Administration takes office with an approach to AI governance that differs sharply from its predecessor's.

🤝 Anthropic has withdrawn the voluntary commitments it made with the Biden Administration in 2023, which were aimed at promoting the sharing of AI risk-management information across industry and government and research into AI bias and discrimination.

🛡️ Although Anthropic had already adopted some of the practices outlined in the commitments and the accord was not legally binding, the Biden Administration intended it to signal its AI policy priorities and lay the groundwork for the more comprehensive AI Executive Order that followed. The Trump Administration's approach to AI governance is markedly different.

🚫 The Trump Administration repealed Biden's AI Executive Order, which had instructed the National Institute of Standards and Technology to author guidance helping companies identify and correct flaws in models, including biases. The Trump Administration also issued an order directing federal agencies to promote AI development "free from ideological bias."

💡 OpenAI has also adjusted its public policies, announcing that it will embrace "intellectual freedom" and work to ensure its AI doesn't censor certain viewpoints. OpenAI also removed a page on its website that expressed its commitment to diversity, equity, and inclusion (DEI).

Anthropic has quietly removed from its website several voluntary commitments the company made in conjunction with the Biden Administration in 2023 to promote safe and “trustworthy” AI.

The commitments, which included pledges to share information on managing AI risks across industry and government and research on AI bias and discrimination, were deleted from Anthropic’s transparency hub last week, according to AI watchdog group The Midas Project. Other Biden-era commitments relating to reducing AI-generated image-based sexual abuse remain.

Anthropic appears to have given no notice of the change. The company didn’t immediately respond to a request for comment.

Anthropic, along with companies including OpenAI, Google, Microsoft, Meta, and Inflection, announced in July 2023 that it had agreed to adhere to certain voluntary AI safety commitments proposed by the Biden Administration. The commitments included internal and external security tests of AI systems before release, investing in cybersecurity to protect sensitive AI data, and developing methods of watermarking AI-generated content.

To be clear, Anthropic had already adopted a number of the practices outlined in the commitments, and the accord wasn’t legally binding. But the Biden Administration’s intent was to signal its AI policy priorities ahead of the more exhaustive AI Executive Order, which came into force several months later.

The Trump Administration has indicated that its approach to AI governance will be quite different.

In January, President Trump repealed the aforementioned AI Executive Order, which had instructed the National Institute of Standards and Technology to author guidance that helps companies identify — and correct — flaws in models, including biases. Critics allied with Trump argued that the order’s reporting requirements were onerous and effectively forced companies to disclose their trade secrets.

Shortly after revoking the AI Executive Order, Trump signed an order directing federal agencies to promote the development of AI “free from ideological bias” that promotes “human flourishing, economic competitiveness, and national security.” Importantly, Trump’s order made no mention of combatting AI discrimination, which was a key tenet of Biden’s initiative.

As The Midas Project noted in a series of posts on X, nothing in the Biden-era commitments suggested that the promise was time-bound or contingent on the party affiliation of the sitting president. In November, following the election, multiple AI companies confirmed that their commitments hadn’t changed.

Anthropic isn’t the only firm to adjust its public policies in the months since Trump took office. OpenAI recently announced it would embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” and work to ensure that its AI doesn’t censor certain viewpoints.

OpenAI also scrubbed a page on its website that used to express the startup’s commitment to diversity, equity, and inclusion, or DEI. These programs have come under fire from the Trump Administration, leading a number of companies to eliminate or substantially retool their DEI initiatives.

Many of Trump’s Silicon Valley advisers on AI, including Marc Andreessen, David Sacks, and Elon Musk, have alleged that companies, including Google and OpenAI, have engaged in AI censorship by limiting their AI chatbots’ answers. Labs including OpenAI have denied that their policy changes are in response to political pressure.

Both OpenAI and Anthropic have pursued, or are actively pursuing, government contracts.
