TechCrunch News · March 11
EU AI Act: Latest draft Code for AI model makers tiptoes towards gentler guidance for Big AI


The EU has published the third draft of its Code of Practice for general purpose AI (GPAI) models, intended to help model makers understand and comply with the relevant provisions of the EU AI Act and avoid sanctions for non-compliance. The Code covers key areas such as transparency, copyright, and risk mitigation, and this draft streamlines the structure and refines the commitments based on earlier feedback. Even so, the copyright section still leans on vague wording such as "best efforts", raising concerns that large AI companies may use that language to sidestep their obligations. The U.S. government has also criticized the EU's AI regulation, arguing that overregulation could stifle innovation.

🇪🇺 The EU AI Act imposes transparency, copyright, and risk-mitigation requirements on general purpose AI (GPAI) models, aiming to regulate the technology's development and ensure it is safe and reliable.

📝 The newly published third draft of the Code of Practice aims to give GPAI model makers clearer compliance guidance, but its copyright provisions use wording such as "best efforts", raising concerns about how firmly they will be enforced.

⚖️ The U.S. government is critical of the EU's approach to AI regulation, arguing that overregulation could hamper innovation and warning Europe not to "kill the golden goose".

🏢 French GPAI model maker Mistral says it is technically difficult to fully comply with some of the AI Act's rules and is working with regulators to find a solution.

Ahead of a May deadline to lock in guidance for providers of general purpose AI (GPAI) models on complying with provisions of the EU AI Act that apply to Big AI, a third draft of the Code of Practice was published on Tuesday. The Code has been in formulation since last year, and this draft is expected to be the last revision round before the guidelines are finalized in the coming months.

A website has also been launched with the aim of boosting the Code’s accessibility. Written feedback on the latest draft should be submitted by March 30, 2025.

The bloc’s risk-based rulebook for AI includes a sub-set of obligations that apply only to the most powerful AI model makers — covering areas such as transparency, copyright, and risk mitigation. The Code is aimed at helping GPAI model makers understand how to meet the legal obligations and avoid the risk of sanctions for non-compliance. AI Act penalties for breaches of GPAI requirements, specifically, could reach up to 3% of global annual turnover.

The latest revision of the Code is billed as having “a more streamlined structure with refined commitments and measures” compared to earlier iterations, based on feedback on the second draft that was published in December.

Further feedback, working group discussions and workshops will feed into the process of turning the third draft into final guidance. And the experts say they hope to achieve greater “clarity and coherence” in the final adopted version of the Code.

The draft is broken down into a handful of sections covering commitments for GPAIs, along with detailed guidance on transparency and copyright measures. There is also a section on safety and security obligations that apply to the most powerful models (those posing so-called systemic risk, or GPAISR).

On transparency, the guidance includes an example of a model documentation form GPAIs might be expected to fill in, so that downstream deployers of their technology have access to key information for their own compliance.
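The draft's form fields aren't reproduced here, but a minimal sketch of the kind of downstream-facing record such a form might capture could look like the following (all field names and values are hypothetical, not the Code's actual form):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a GPAI model documentation record for downstream
# deployers. Field names are illustrative only; they are not taken from the
# draft Code's actual model documentation form.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str                      # ISO 8601, e.g. "2025-03-11"
    modalities: list[str]                  # e.g. ["text", "image"]
    training_compute_flops: float          # estimated total training compute
    training_data_summary: str             # high-level description of sources
    acceptable_use_policy_url: str
    known_limitations: list[str] = field(default_factory=list)

doc = ModelDocumentation(
    model_name="example-gpai-1",
    provider="Example AI",
    release_date="2025-03-11",
    modalities=["text"],
    training_compute_flops=8e24,
    training_data_summary="Publicly available web text plus licensed corpora.",
    acceptable_use_policy_url="https://example.com/aup",
    known_limitations=["May produce factually incorrect output"],
)
print(doc)
```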

Elsewhere, the copyright section likely remains the most immediately contentious area for Big AI.

The current draft is replete with terms like “best efforts”, “reasonable measures” and “appropriate measures” when it comes to complying with commitments such as respecting rights requirements when crawling the web to acquire data for model training, or mitigating the risk of models churning out copyright-infringing outputs.
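The draft doesn't prescribe any particular mechanism for respecting rights reservations, but one widely used machine-readable signal is robots.txt. A minimal sketch, using Python's standard library, of checking it before fetching a page for training data (the URLs and user agent are placeholders):

```python
from urllib import robotparser

# Minimal sketch: consult robots.txt before fetching a page as training data.
# The target site and user agent are placeholders; the draft Code does not
# mandate this specific mechanism.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the robots.txt file

user_agent = "ExampleTrainingCrawler"
page = "https://example.com/articles/some-story"

if rp.can_fetch(user_agent, page):
    print(f"{page}: allowed for {user_agent}")
else:
    print(f"{page}: disallowed; skipping to respect the rights reservation")
```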

The use of such hedged language suggests data-mining AI giants may feel they have plenty of wiggle room to carry on grabbing protected information to train their models and ask forgiveness later — but it remains to be seen whether the language gets toughened up in the final draft of the Code.

Language used in an earlier iteration of the Code — saying GPAIs should provide a single point of contact and complaint handling to make it easier for rightsholders to communicate grievances “directly and rapidly” — appears to have gone. Now, there is merely a line stating: “Signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.”

The current text also suggests GPAIs may be able to refuse to act on copyright complaints by rightsholders if they are “manifestly unfounded or excessive, in particular because of their repetitive character.” It suggests attempts by creatives to flip the scales by making use of AI tools to try to detect copyright issues and automate filing complaints against Big AI could result in them… simply being ignored.

When it comes to safety and security, the EU AI Act’s requirements to evaluate and mitigate systemic risks already only apply to a subset of the most powerful models (those trained using a total computing power of more than 10^25 FLOPs) — but this latest draft sees some previously recommended measures being further narrowed in response to feedback.
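For rough intuition about where that threshold sits, a common back-of-envelope approximation puts dense transformer training compute at about 6 × parameters × training tokens. A sketch under that assumption (the model sizes are illustrative, and this is not the Act's official measurement methodology):

```python
# Back-of-envelope check against the AI Act's 1e25 FLOPs threshold, using the
# widely cited ~6 * N * D approximation for dense transformer training compute
# (N = parameter count, D = training tokens). Illustrative only.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (400e9, 15e12)]:
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.1e} FLOPs ({status} 1e25)")
```

On these rough numbers, a 7B-parameter model trained on 2T tokens lands around 8.4e22 FLOPs, far under the threshold, while only very large training runs approach or exceed 1e25.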

Unmentioned in the EU press release about the latest draft are blistering attacks on European lawmaking generally, and the bloc’s rules for AI specifically, coming out of the U.S. administration led by president Donald Trump.

At the Paris AI Action summit last month, U.S. vice president JD Vance dismissed the need to regulate to ensure AI is applied safely — Trump’s administration would instead be leaning into “AI opportunity”. And he warned Europe that overregulation could kill the golden goose.

Since then, the bloc has moved to kill off one AI safety initiative — putting the AI Liability Directive on the chopping block. EU lawmakers have also trailed an incoming “omnibus” package of simplifying reforms to existing rules that they say are aimed at reducing red tape and bureaucracy for business, with a focus on areas like sustainability reporting. But with the AI Act still in the process of being implemented, there is clearly pressure being applied to dilute requirements.

At the Mobile World Congress trade show in Barcelona earlier this month, Arthur Mensch, founder of French GPAI model maker Mistral — a particularly loud opponent of the EU AI Act during negotiations to conclude the legislation back in 2023 — claimed the company is having difficulties finding technological solutions to comply with some of the rules. He added that it is “working with the regulators to make sure that this is resolved.”

While this GPAI Code is being drawn up by independent experts, the European Commission — via the AI Office, which oversees enforcement and other activity related to the law — is, in parallel, producing some “clarifying” guidance that will also shape how the law applies, including definitions for GPAIs and their responsibilities.

So look out for further guidance, “in due time”, from the AI Office — which the Commission says will “clarify … the scope of the rules” — as this could offer a pathway for nerve-losing lawmakers to respond to the U.S. lobbying to deregulate AI.

