The Verge - Artificial Intelligence, February 5
Google scraps promise not to develop AI weapons

Google recently updated its AI principles, removing its earlier pledge not to use AI in ways "that cause or are likely to cause overall harm," including bans on use in surveillance, weapons, and technology designed to injure people. At the same time, Google DeepMind's CEO and Google's senior executive for technology and society published a blog post emphasizing that the AI principles will now focus on innovation, collaboration, and "responsible" AI development. The move brings Google in line with competitors such as Meta and OpenAI, whose AI technologies are already permitted for certain military uses. Although Google once pledged not to develop AI weapons, it previously took part in the Pentagon's Project Maven and the Project Nimbus military cloud contract with the Israeli government, projects that raised employee concerns about whether the company was violating its AI principles.

🚫 Google revised its AI principles, removing its earlier pledge not to use AI in ways "that cause or are likely to cause overall harm," including bans on use in surveillance, weapons, and technology intended to injure people.

🤝 Google DeepMind's CEO and Google's senior executive for technology and society published a blog post emphasizing that the AI principles will focus on innovation, collaboration, and "responsible" AI development, while making no specific commitments.

🌍 Google argues that democracies should lead in AI development, guided by core values such as freedom, equality, and respect for human rights, and that companies, governments, and organizations should work together to create AI that protects people, promotes global growth, and supports national security.

⚔️ Although Google once pledged not to develop AI weapons, it previously worked on the Pentagon's Project Maven and the Project Nimbus military cloud contract with the Israeli government, projects that raised employee concerns about whether the company was violating its AI principles.

⚖️ The move brings Google's AI ethics guidelines more in line with those of competitors. Meta's Llama and OpenAI's ChatGPT are permitted for some military uses, and a deal between Amazon and government software maker Palantir enables Anthropic to sell its Claude AI to US military and intelligence customers.

Can’t break promises if you rescind them entirely.

Google updated its artificial intelligence principles on Tuesday to remove commitments around not using the technology in ways “that cause or are likely to cause overall harm.” A scrubbed section of the revised AI ethics guidelines previously committed Google to not designing or deploying AI for use in surveillance, weapons, and technology intended to injure people. The change was first spotted by The Washington Post and captured here by the Internet Archive.

Coinciding with these changes, Google DeepMind CEO Demis Hassabis and Google’s senior executive for technology and society, James Manyika, published a blog post detailing new “core tenets” that its AI principles would focus on. These include innovation, collaboration, and “responsible” AI development — the latter making no specific commitments.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” reads the blog post. “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Hassabis joined Google after it acquired DeepMind in 2014. In an interview with Wired in 2015, he said that the acquisition included terms that prevented DeepMind technology from being used in military or surveillance applications. 

While Google had pledged not to develop AI weapons, the company has worked on various military contracts, including Project Maven — a 2018 Pentagon project that saw Google using AI to help analyze drone footage — and its 2021 Project Nimbus military cloud contract with the Israeli government. These agreements, made long before AI developed into what it is today, caused contention among employees within Google who believed the agreements violated the company’s AI principles.

Google’s updated ethical guidelines around AI bring it more in line with competing AI developers. Meta’s Llama and OpenAI’s ChatGPT tech are permitted for some instances of military use, and a deal between Amazon and government software maker Palantir enables Anthropic to sell its Claude AI to US military and intelligence customers.

