TechCrunch News, February 14
UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic

The U.K. government has made a significant shift in its AI strategy, renaming the AI Safety Institute the "AI Security Institute." The change signals a move away from exploring existential risk and bias in large language models toward cybersecurity, specifically strengthening protections against the risks AI poses to national security and crime. At the same time, the government announced a new partnership with Anthropic, aimed at exploring the use of Anthropic's AI assistant Claude in public services and contributing to scientific research and economic modelling. Despite the strategic shift, the government stresses that AI safety has not been abandoned; rather, it intends to pursue technological progress while protecting citizens from malicious uses of AI.

🛡️ The U.K. government has renamed the AI Safety Institute the "AI Security Institute," marking a strategic shift from examining AI's potential risks to using AI to drive economic growth and safeguard national security.

🤝 The government has partnered with Anthropic to explore how Anthropic's AI assistant Claude could improve public services, making information and services more efficient and accessible, while Anthropic will also contribute to scientific research and economic modelling.

🚨 Despite the rename, the AI Security Institute's work will still center on evaluating serious risks AI poses to the public, and it is forming a dedicated criminal-misuse team and deepening cooperation with the national security community to address potential threats.

🌍 In the U.S., the AI Safety Institute faces the risk of being dismantled, a sign that priorities around AI safety and development are shifting worldwide.

The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute the "AI Security Institute." With that, it will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically "strengthening protections against the risks AI poses to national security and crime."

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the memorandum of understanding indicates that the two will "explore" using Anthropic's AI assistant Claude in public services, and that Anthropic will aim to contribute to work in scientific research and economic modelling. At the AI Security Institute, it will also provide tools to evaluate AI capabilities in the context of identifying security risks.

“AI has the potential to transform how governments serve their citizens,” Anthropic co-founder and CEO Dario Amodei said in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.”

Anthropic is the only company being announced today — coinciding with a week of AI activities in Munich and Paris — but it’s not the only one that is working with the government. A series of new tools that were unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.) 

The government’s switch-up of the AI Safety Institute — launched just over a year ago with a lot of fanfare — to AI Security shouldn’t come as too much of a surprise. 

When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words "safety," "harm," "existential," and "threat" did not appear at all in the document.

That was not an oversight. The government's plan is to kickstart investment in a more modernized economy, using technology, and specifically AI, to do that. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big techs. The main messages it's been promoting have been development, AI, and more development. Civil servants will have their own AI assistant called "Humphrey," and they're being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, and chatbots.

So have AI safety issues been resolved? Not exactly, but the message seems to be that they can’t be considered at the expense of progress.

The government claims that despite the name change, the song will remain the same.

“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

“The Institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public,” added Ian Hogarth, who remains the chair of the institute. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Further afield, priorities definitely appear to have changed around the importance of "AI safety." The biggest risk the AI Safety Institute in the U.S. is contemplating right now is that it will be dismantled. U.S. Vice President J.D. Vance telegraphed as much earlier this week during his speech in Paris.
