The implementation of the EU’s AI General-Purpose Code of Practice has exposed deep divisions among major technology companies. Microsoft has signalled its intention to sign the European Union’s voluntary AI compliance framework while Meta flatly refuses participation, calling the guidelines regulatory overreach that will stifle innovation.
Microsoft President Brad Smith told Reuters on Friday, “I think it’s likely we will sign. We need to read the documents.” Smith emphasised his company’s collaborative approach, stating, “Our goal is to find a way to be supportive, and at the same time, one of the things we welcome is the direct engagement by the AI Office with industry.”
In contrast, Meta’s Chief Global Affairs Officer, Joel Kaplan, announced on LinkedIn that “Meta won’t be signing it. The code introduces several legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
Kaplan argued that “Europe is heading down the wrong path on AI” and warned the EU AI code would “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
Early adopters vs. holdouts
The technology sector’s fractured response highlights different strategies for managing European regulatory compliance. OpenAI and Mistral have signed the Code, positioning themselves as early adopters of the voluntary framework.
OpenAI announced its commitment, stating, “Signing the Code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age.”
OpenAI is the second leading AI company to sign the EU code of practice for general-purpose AI models, after Mistral, according to industry observers tracking the voluntary commitments.
More than 40 of Europe’s largest businesses, including ASML Holding and Airbus, signed a letter earlier this month asking the European Commission to halt implementation of the AI Act and calling for a two-year delay.
Code requirements and timeline
The code of practice was published on July 10 by the European Commission and aims to provide legal certainty for companies developing general-purpose AI models ahead of mandatory enforcement beginning August 2, 2025.
The voluntary tool was developed by 13 independent experts, with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rights-holders, and civil society organisations.
The EU AI code establishes requirements in three areas. Transparency obligations require providers to maintain technical model and dataset documentation, while copyright compliance mandates clear internal policies outlining how training data is obtained and used under EU copyright rules.
For the most advanced models, safety and security obligations apply under the “GPAI with Systemic Risk” (GPAISR) category, which covers frontier models such as OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro.
Signatories will have to publish summaries of the content used to train their general-purpose AI models and put in place a policy to comply with EU copyright law. The framework requires companies to document training data sources, implement robust risk assessments, and establish governance frameworks for managing potential AI system threats.
Enforcement and penalties
The penalties for non-compliance are substantial: fines can reach €35 million or 7% of global annual turnover, whichever is greater. For providers of GPAI models specifically, the European Commission may impose fines of up to €15 million or 3% of worldwide annual turnover.
The Commission has indicated that if providers adhere to an approved Code of Practice, the AI Office and national regulators will treat that as a simplified compliance path, focusing enforcement on checking that the Code’s commitments are met, rather than conducting audits of every AI system. This creates incentives for early adoption among companies seeking regulatory predictability.
The EU AI code forms part of the broader AI Act framework. Under the AI Act, obligations for GPAI models, detailed in Articles 50–55, become enforceable twelve months after the Act entered into force, on 2 August 2025. Providers of GPAI models placed on the market before this date must comply with the AI Act by 2 August 2027.
Industry impact and global implications
The different responses suggest technology companies are adopting fundamentally different strategies for managing regulatory relationships in global markets. Microsoft’s cooperative stance contrasts sharply with Meta’s confrontational approach, potentially setting precedents for how major AI developers engage with international regulation.
Despite mounting opposition, the European Commission has refused to delay. The EU’s Internal Market Commissioner Thierry Breton has insisted that the framework will proceed as scheduled, saying the AI Act is essential for consumer safety and trust in emerging technologies.
The EU AI code’s current voluntary nature during initial phases provides companies with opportunities to influence regulatory development through participation. However, mandatory enforcement beginning in August 2025 ensures eventual compliance regardless of voluntary code adoption.
For companies operating in multiple jurisdictions, the EU framework may influence global AI governance standards. The framework aligns with broader global AI governance developments, including the G7 Hiroshima AI Process and various national AI strategies, potentially establishing European approaches as international benchmarks.
Looking ahead
In the immediate term, the Code’s content will be reviewed by EU authorities: the European Commission and Member States are assessing its adequacy and are expected to formally endorse it, with a final decision planned by 2 August 2025.
The regulatory framework creates significant implications for AI development globally, as companies must balance innovation objectives with compliance obligations in multiple jurisdictions. The different company responses to the voluntary code foreshadow potential compliance challenges as mandatory requirements take effect.
See also: Navigating the EU AI Act: Implications for UK businesses
