AI News · 13 hours ago
Tech giants split on EU AI code as compliance deadline looms

The EU's new General-Purpose AI Code of Practice has exposed a deep split among the technology giants. Microsoft says it is inclined to sign, stressing cooperation and dialogue, while Meta has flatly refused to participate, arguing the code creates legal uncertainty, goes beyond the scope of the AI Act, and could stifle innovation. OpenAI and Mistral are among the first signatories, expressing their commitment to providing safe AI models for European users. The code aims to give providers of general-purpose AI models legal certainty and includes transparency, copyright-compliance, and safety requirements for high-risk models. Violations can bring heavy fines. The move will shape not only AI development in Europe but potentially global AI governance standards as well.

⚖️ Microsoft leans toward signing the EU AI Code of Practice, emphasising direct engagement with and supportive cooperation alongside the AI Office in the hope of finding a collaborative path for AI development.

🚫 Meta refuses to sign the EU AI Code of Practice, arguing that it introduces legal uncertainty and that its measures go beyond the scope of the AI Act, which would hamper the development and deployment of frontier AI models in Europe.

🚀 OpenAI and Mistral are the first AI companies to sign the code, signalling their commitment to providing safe, accessible AI models and letting European users benefit from the Intelligence Age.

📜 The EU AI Code of Practice requires model providers to meet transparency obligations, document where training data comes from and how it is used, and apply safety and security measures to the most advanced models that pose systemic risk.

💰 Violating the EU AI Code of Practice can bring severe penalties of up to €35 million or 7% of a company's global annual turnover, setting a clear compliance bar for AI model providers.

The implementation of the EU’s AI General-Purpose Code of Practice has exposed deep divisions among major technology companies. Microsoft has signalled its intention to sign the European Union’s voluntary AI compliance framework while Meta flatly refuses participation, calling the guidelines regulatory overreach that will stifle innovation.

Microsoft President Brad Smith told Reuters on Friday, “I think it’s likely we will sign. We need to read the documents.” Smith emphasised his company’s collaborative approach, stating, “Our goal is to find a way to be supportive, and at the same time, one of the things we welcome is the direct engagement by the AI Office with industry.”

In contrast, Meta’s Chief Global Affairs Officer, Joel Kaplan, announced on LinkedIn that “Meta won’t be signing it. The code introduces several legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Kaplan argued that “Europe is heading down the wrong path on AI” and warned the EU AI code would “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”

Early adopters vs. holdouts

The technology sector’s fractured response highlights different strategies for managing European regulatory compliance. OpenAI and Mistral have signed the Code, positioning themselves as early adopters of the voluntary framework.

OpenAI announced its commitment, stating, “Signing the Code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age.”

OpenAI thus joins the EU code of practice for general-purpose AI models as the second leading AI company to sign, after Mistral, according to industry observers tracking the voluntary commitments.

More than 40 of Europe’s largest businesses, including ASML Holding and Airbus, signed a letter earlier this month asking the European Commission to halt implementation of the AI Act and calling for a two-year delay.

Code requirements and timeline

The code of practice was published on July 10 by the European Commission and aims to provide legal certainty for companies developing general-purpose AI models ahead of mandatory enforcement beginning August 2, 2025.

The voluntary tool was developed by 13 independent experts, with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rights-holders, and civil society organisations.

The EU AI code establishes requirements in three areas. Transparency obligations require providers to maintain technical model and dataset documentation, while copyright compliance mandates clear internal policies outlining how training data is obtained and used under EU copyright rules.

For the most advanced models, safety and security obligations apply under the “GPAI with Systemic Risk” (GPAISR) category, which covers frontier systems like OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro.

Signatories will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The framework requires companies to document training data sources, implement robust risk assessments, and establish governance frameworks for managing potential AI system threats.

Enforcement and penalties

The penalties for non-compliance are substantial: up to €35 million or 7% of global annual turnover, whichever is greater. For providers of GPAI models specifically, the Commission may impose a fine of up to €15 million or 3% of worldwide annual turnover.
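
As a rough illustration of how those two "whichever is greater" ceilings work, the sketch below (a hypothetical Python example, not part of the Act's text) computes the maximum possible fine under each tier for an assumed turnover figure; actual penalties are set case by case and could be far lower.

```python
def max_fine(turnover_eur: float, gpai_provider: bool = False) -> float:
    """Upper bound on a fine under the AI Act's headline penalty tiers.

    Illustrative only: applies the 'whichever is greater' rule to the
    fixed cap and the turnover-based cap reported for each tier.
    """
    if gpai_provider:
        # GPAI-provider tier: up to €15 million or 3% of worldwide annual turnover
        return max(15_000_000, 0.03 * turnover_eur)
    # General tier: up to €35 million or 7% of global annual turnover
    return max(35_000_000, 0.07 * turnover_eur)

# Hypothetical company with €2 billion in worldwide annual turnover
turnover = 2_000_000_000
print(f"General ceiling:       €{max_fine(turnover):,.0f}")        # €140,000,000
print(f"GPAI-provider ceiling: €{max_fine(turnover, True):,.0f}")  # €60,000,000
```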

The Commission has indicated that if providers adhere to an approved Code of Practice, the AI Office and national regulators will treat that as a simplified compliance path, focusing enforcement on checking that the Code’s commitments are met, rather than conducting audits of every AI system. This creates incentives for early adoption among companies seeking regulatory predictability.

The EU AI code forms part of the broader AI Act framework. Under the AI Act, obligations for GPAI models, detailed in Articles 50–55, become enforceable twelve months after the Act enters into force (2 August 2025). Providers of GPAI models placed on the market before this date must comply with the AI Act by 2 August 2027.

Industry impact and global implications

The different responses suggest technology companies are adopting fundamentally different strategies for managing regulatory relationships in global markets. Microsoft’s cooperative stance contrasts sharply with Meta’s confrontational approach, potentially setting precedents for how major AI developers engage with international regulation.

Despite mounting opposition, the European Commission has refused to delay. The EU’s Internal Market Commissioner Thierry Breton has insisted that the framework will proceed as scheduled, saying the AI Act is essential for consumer safety and trust in emerging technologies.

The EU AI code’s current voluntary nature during initial phases provides companies with opportunities to influence regulatory development through participation. However, mandatory enforcement beginning in August 2025 ensures eventual compliance regardless of voluntary code adoption.

For companies operating in multiple jurisdictions, the EU framework may influence global AI governance standards. The framework aligns with broader global AI governance developments, including the G7 Hiroshima AI Process and various national AI strategies, potentially establishing European approaches as international benchmarks.

Looking ahead

In the immediate term, the Code’s content will be reviewed by EU authorities: the European Commission and Member States are assessing the Code’s adequacy and are expected to formally endorse it, with a final decision planned by 2 August 2025.

The regulatory framework creates significant implications for AI development globally, as companies must balance innovation objectives with compliance obligations in multiple jurisdictions. The different company responses to the voluntary code foreshadow potential compliance challenges as mandatory requirements take effect.

See also: Navigating the EU AI Act: Implications for UK businesses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

