Unite.AI · March 15, 04:59
OpenAI, Anthropic, and Google Urge Action as US AI Lead Diminishes

Leading US AI companies OpenAI, Anthropic, and Google have warned the federal government that America's technological lead in artificial intelligence is narrowing as Chinese models such as DeepSeek R1 rise. In filings submitted to the US government, the companies stress national security risks, economic competitiveness, and the need for a strategic regulatory framework to meet intensifying global competition and China's state-subsidized AI development. They single out the emergence of the DeepSeek R1 model as evidence that the technological gap is closing fast, and call on the government to act to preserve US leadership in AI.

🇨🇳 The emergence of DeepSeek R1 has alarmed the US AI giants: OpenAI calls the model “state-subsidized, state-controlled, and freely available” and a threat to US interests, while Anthropic found that DeepSeek R1 was overly compliant in answering questions about biological weapons, in contrast to the safety measures of US models.

🛡️ On national security, OpenAI worries that the Chinese government could compel DeepSeek to manipulate its models; Anthropic focuses on AI biosecurity risks and flags a loophole in Nvidia H20 chip exports; Google calls for balancing national security protections with economic competitiveness.

💡 On economic competitiveness, Anthropic warns that by 2027 training a single frontier AI model will require roughly 5 gigawatts of power and recommends adding 50 gigawatts of capacity dedicated to the AI industry by 2027; OpenAI stresses the importance of “democratic AI”; Google calls for investing in AI, accelerating government adoption of AI, and promoting innovation internationally.

⚖️ On regulation, OpenAI opposes state-by-state AI rules and proposes a voluntary partnership between the federal government and the private sector; Anthropic emphasizes strengthening the government's evaluation capacity; Google calls for a “pro-innovation federal framework” with risk-based, sector-specific AI governance and standards.

Leading US artificial intelligence companies OpenAI, Anthropic, and Google have warned the federal government that America's technological lead in AI is “not wide and is narrowing” as Chinese models like DeepSeek R1 demonstrate increasing capabilities, according to documents submitted to the US government in response to a request for information on developing an AI Action Plan.

These recent submissions from March 2025 highlight urgent concerns about national security risks, economic competitiveness, and the need for strategic regulatory frameworks to maintain US leadership in AI development amid growing global competition and China's state-subsidized advancement in the field. Anthropic and Google submitted their responses on March 6, 2025, while OpenAI's submission followed on March 13, 2025.

The China Challenge and DeepSeek R1

The emergence of China's DeepSeek R1 model has triggered significant concern among major US AI developers, who view it not as superior to American technology but as compelling evidence that the technological gap is quickly closing.

OpenAI explicitly warns that “DeepSeek shows that our lead is not wide and is narrowing,” characterizing the model as “simultaneously state-subsidized, state-controlled, and freely available” – a combination they consider particularly threatening to US interests and global AI development.

According to OpenAI's analysis, DeepSeek poses risks similar to those associated with Chinese telecommunications giant Huawei. “As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm,” OpenAI stated in its submission.

The company further raised concerns about data privacy and security, noting that Chinese regulations could require DeepSeek to share user data with the government. This could enable the Chinese Communist Party to develop more advanced AI systems aligned with state interests while compromising individual privacy.

Anthropic's assessment focuses heavily on biosecurity implications. Their evaluation revealed that DeepSeek R1 “complied with answering most biological weaponization questions, even when formulated with a clearly malicious intent.” This willingness to provide potentially dangerous information stands in contrast to safety measures implemented by leading US models.
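
To make the kind of evaluation Anthropic describes concrete, here is a minimal sketch of a refusal-rate test: run a model over a set of red-team prompts and measure how often it answers rather than refuses. The refusal heuristic, stub model, and prompt placeholders below are hypothetical illustrations, not Anthropic's actual harness.

```python
# Minimal sketch of a refusal-rate evaluation. All names here are
# hypothetical stand-ins, not Anthropic's real methodology.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm unable")


def is_refusal(completion: str) -> bool:
    """Crude heuristic: a completion that opens with a stock refusal
    phrase counts as a refusal. Real evaluations use trained
    classifiers or human grading instead."""
    return completion.strip().lower().startswith(REFUSAL_MARKERS)


def compliance_rate(model_generate, red_team_prompts) -> float:
    """Fraction of harmful prompts the model answered instead of refusing."""
    answered = sum(
        0 if is_refusal(model_generate(p)) else 1 for p in red_team_prompts
    )
    return answered / len(red_team_prompts)


if __name__ == "__main__":
    # Stub model that refuses everything, so the script runs end to end.
    stub = lambda prompt: "I can't help with that request."
    prompts = ["<red-team prompt 1>", "<red-team prompt 2>"]
    print(f"compliance rate: {compliance_rate(stub, prompts):.0%}")
```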

“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing,” Anthropic echoed in its own submission, reinforcing the urgent tone of the warnings.

Both companies frame the competition in ideological terms, with OpenAI describing a contest between American-led “democratic AI” and Chinese “autocratic, authoritarian AI.” They suggest that DeepSeek's reported willingness to generate instructions for “illicit and harmful activities such as identity fraud and intellectual property theft” reflects fundamentally different ethical approaches to AI development between the two nations.

The emergence of DeepSeek R1 is undoubtedly a significant milestone in the global AI race, demonstrating China's growing capabilities despite US export controls on advanced semiconductors and highlighting the urgency of coordinated government action to maintain American leadership in the field.

National Security Implications

The submissions from all three companies emphasize significant national security concerns arising from advanced AI models, though they approach these risks from different angles.

OpenAI's warnings focus heavily on the potential for CCP influence over Chinese AI models like DeepSeek. The company stresses that Chinese regulations could compel DeepSeek to “compromise critical infrastructure and sensitive applications” and require user data to be shared with the government. This data sharing could enable the development of more sophisticated AI systems aligned with China's state interests, creating both immediate privacy issues and long-term security threats.

Anthropic's concerns center on biosecurity risks posed by advanced AI capabilities, regardless of their country of origin. In a particularly alarming disclosure, Anthropic revealed that “Our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development.” This candid admission underscores the dual-use nature of advanced AI systems and the need for robust safeguards.

Anthropic also identified what they describe as a “regulatory gap in US chip restrictions” related to Nvidia's H20 chips. While these chips meet the reduced performance requirements for Chinese export, they “excel at text generation (‘sampling’)—a fundamental component of advanced reinforcement learning methodologies critical to current frontier model capability advancements.” Anthropic urged “immediate regulatory action” to address this potential vulnerability in current export control frameworks.
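
To see why sampling throughput matters here, consider the shape of an RL-style fine-tuning step: for every prompt, the system generates many candidate completions, scores them with a reward signal, and updates the policy toward the high-reward ones. The sketch below uses hypothetical generate() and reward() stubs in place of real models; its only point is that generation dominates the loop's compute.

```python
# Toy shape of an RL fine-tuning step, showing why generation
# ("sampling") dominates the workload. generate() and reward() are
# hypothetical stand-ins for a policy model and a reward model.
import random


def generate(prompt: str) -> str:
    """Placeholder for autoregressive sampling from the policy model,
    the expensive, hardware-bound step in practice."""
    return f"{prompt} -> completion {random.randint(0, 9)}"


def reward(completion: str) -> float:
    """Placeholder reward-model score in [0, 1)."""
    return random.random()


def rl_step(prompts, samples_per_prompt=8):
    """One update step: len(prompts) * samples_per_prompt model
    generations, each scored once. In a real system the policy is then
    updated toward high-reward samples; here we just return the best."""
    rollouts = [
        (p, completion, reward(completion))
        for p in prompts
        for completion in (generate(p) for _ in range(samples_per_prompt))
    ]
    return max(rollouts, key=lambda r: r[2])


if __name__ == "__main__":
    print(rl_step(["example prompt"]))
```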

Google, while acknowledging AI security risks, advocates for a more balanced approach to export controls. The company cautions that current AI export rules “may undermine economic competitiveness goals…by imposing disproportionate burdens on U.S. cloud service providers.” Instead, Google recommends “balanced export controls that protect national security while enabling U.S. exports and global business operations.”

All three companies emphasize the need for enhanced government evaluation capabilities. Anthropic specifically calls for building “the federal government's capacity to test and evaluate powerful AI models for national security capabilities” to better understand potential misuses by adversaries. This would involve preserving and strengthening the AI Safety Institute, directing NIST to develop security evaluations, and assembling teams of interdisciplinary experts.

Comparison Table: OpenAI, Anthropic, Google

| Area of Focus | OpenAI | Anthropic | Google |
| --- | --- | --- | --- |
| Primary Concern | Political and economic threats from state-controlled AI | Biosecurity risks from advanced models | Maintaining innovation while balancing security |
| View on DeepSeek R1 | “State-subsidized, state-controlled, and freely available,” with Huawei-like risks | Complied with “biological weaponization questions,” even those posed with malicious intent | Less specific focus on DeepSeek, more on broader competition |
| National Security Priority | CCP influence and data security risks | Biosecurity threats and chip export loopholes | Balanced export controls that don't burden US providers |
| Regulatory Approach | Voluntary partnership with the federal government; single point of contact | Enhanced government testing capacity; hardened export controls | “Pro-innovation federal framework”; sector-specific governance |
| Infrastructure Focus | Government adoption of frontier AI tools | Energy expansion (50 GW by 2027) for AI development | Coordinated action on energy and permitting reform |
| Distinctive Recommendation | Tiered export control framework promoting “democratic AI” | Immediate regulatory action on Nvidia H20 chips exported to China | Industry access to openly available data for fair learning |

Economic Competitiveness Strategies

Infrastructure requirements, particularly energy needs, emerge as a critical factor in maintaining U.S. AI leadership. Anthropic warned that “by 2027, training a single frontier AI model will require networked computing clusters drawing approximately five gigawatts of power.” They proposed an ambitious national target to build 50 additional gigawatts of power dedicated specifically to the AI industry by 2027, alongside measures to streamline permitting and expedite transmission line approvals.
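
For perspective, a quick back-of-envelope calculation shows what those figures imply; the 90-day run length below is an illustrative assumption, not a number from the submission.

```python
# Back-of-envelope arithmetic on the quoted figures. Only the two
# gigawatt numbers come from Anthropic's submission; the run length is
# an assumption for illustration.
frontier_cluster_gw = 5    # power draw of one frontier training cluster
national_target_gw = 50    # Anthropic's proposed dedicated buildout
training_days = 90         # assumed length of one training run

clusters_supported = national_target_gw / frontier_cluster_gw
energy_twh = frontier_cluster_gw * 24 * training_days / 1000  # GWh -> TWh

print(f"{national_target_gw} GW supports ~{clusters_supported:.0f} "
      f"clusters of {frontier_cluster_gw} GW each")
print(f"a {training_days}-day run at {frontier_cluster_gw} GW uses "
      f"~{energy_twh:.1f} TWh")
```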

OpenAI once again frames the competition as an ideological contest between “democratic AI” and “autocratic, authoritarian AI” built by the CCP. Their vision for “democratic AI” emphasizes “a free market promoting free and fair competition” and “freedom for developers and users to work with and direct our tools as they see fit,” within appropriate safety guardrails.

All three companies offered detailed recommendations for maintaining U.S. leadership. Anthropic stressed the importance of “strengthening American economic competitiveness” and ensuring that “AI-driven economic benefits are widely shared across society.” They advocated for “securing and scaling up U.S. energy supply” as a critical prerequisite for keeping AI development within American borders, warning that energy constraints could force developers overseas.

Google called for decisive actions to “supercharge U.S. AI development,” focusing on three key areas: investment in AI, acceleration of government AI adoption, and promotion of pro-innovation approaches internationally. The company emphasized the need for “coordinated federal, state, local, and industry action on policies like transmission and permitting reform to address surging energy needs” alongside “balanced export controls” and “continued funding for foundational AI research and development.”

Google's submission particularly highlighted the need for a “pro-innovation federal framework for AI” that would prevent a patchwork of state regulations while ensuring industry access to openly available data for training models. Their approach emphasizes “focused, sector-specific, and risk-based AI governance and standards” rather than broad regulation.

Regulatory Recommendations

A unified federal approach to AI regulation emerged as a consistent theme across all submissions. OpenAI warned against “regulatory arbitrage being created by individual American states” and proposed a “holistic approach that enables voluntary partnership between the federal government and the private sector.” Their framework envisions oversight by the Department of Commerce, potentially through a reimagined US AI Safety Institute, providing a single point of contact for AI companies to engage with the government on security risks.

On export controls, OpenAI advocated for a tiered framework designed to promote American AI adoption in countries aligned with democratic values while restricting access for China and its allies. Anthropic similarly called for “hardening export controls to widen the U.S. AI lead” and for dramatically improving “the security of U.S. frontier labs” through enhanced collaboration with intelligence agencies.

Copyright and intellectual property considerations featured prominently in both OpenAI and Google's recommendations. OpenAI stressed the importance of maintaining fair use principles to enable AI models to learn from copyrighted material without undermining the commercial value of existing works. They warned that overly restrictive copyright rules could disadvantage U.S. AI firms compared to Chinese competitors. Google echoed this view, advocating for “balanced copyright rules, such as fair use and text-and-data mining exceptions” which they described as “critical to enabling AI systems to learn from prior knowledge and publicly available data.”

All three companies emphasized the need for accelerated government adoption of AI technologies. OpenAI called for an “ambitious government adoption strategy” to modernize federal processes and safely deploy frontier AI tools. They specifically recommended removing obstacles to AI adoption, including outdated accreditation processes like FedRAMP, restrictive testing authorities, and inflexible procurement pathways. Anthropic similarly advocated for “promoting rapid AI procurement across the federal government” to revolutionize operations and enhance national security.

Google suggested “streamlining outdated accreditation, authorization, and procurement practices” within the government to accelerate AI adoption. They emphasized the importance of effective public procurement rules and improved interoperability in government cloud solutions to facilitate innovation.

The comprehensive submissions from these leading AI companies present a clear message: maintaining American leadership in artificial intelligence requires coordinated federal action across multiple fronts – from infrastructure development and regulatory frameworks to national security protections and government modernization – particularly as competition from China intensifies.
