The Verge - Artificial Intelligence · June 6, 02:42
Anthropic launches new Claude service for military and intelligence use

Anthropic has launched Claude Gov, an AI product designed specifically for U.S. defense and intelligence agencies. The models carry looser restrictions for government use and are trained to better analyze classified information. Claude Gov is meant to serve distinct government needs such as threat assessment and intelligence analysis. Although the models underwent rigorous safety testing, they refuse less when handling classified information and have a deeper understanding of documents and context in the defense and intelligence domains. Anthropic's move follows OpenAI's ChatGPT Gov and reflects a broader trend of AI companies seeking to work with government agencies.

🛡️ Claude Gov is designed specifically for U.S. government agencies for defense and intelligence analysis, with looser restrictions.

📝 Claude Gov models refuse less when handling classified information and better understand defense- and intelligence-related documents.

⚠️ Anthropic's usage policy prohibits creating or facilitating the exchange of illegal weapons or goods, but allows use restrictions to be tailored to a government agency's mission and legal authorities.

🤝 Anthropic's move follows OpenAI's ChatGPT Gov, reflecting a trend of AI companies deepening their work with government agencies.

Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information.

The company said the models it’s announcing “are already deployed by agencies at the highest level of U.S. national security,” and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.

Claude Gov models are designed specifically to handle government needs like threat assessment and intelligence analysis, per Anthropic's blog post. And although the company said they "underwent the same rigorous safety testing as all of our Claude models," the models have certain specifications for national security work. For example, they "refuse less when engaging with classified information" that's fed into them, something consumer-facing Claude is trained to flag and avoid.

Claude Gov’s models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security. 

Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There’s been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid. For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.

Anthropic’s usage policy specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” 

At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”   

Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape.

When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more. Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir's FedStart program, a SaaS offering for companies that want to deploy software for the federal government.

Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.
