Unite.AI | November 26, 2024
EU’s New AI Code of Conduct Set to Impact Regulation

The European Commission has rolled out a brand-new AI Code of Conduct aimed at regulating how AI companies operate, especially those training large models with massive computing power. The Code requires AI companies to file a notification before training a model and to accept external testing, to ensure the safety and reliability of AI systems. It also stresses copyright protection, requiring AI companies to respect websites' robots.txt protocol and to avoid training models on pirated content. These new rules will have a far-reaching impact on the AI industry, forcing companies to rethink their AI development processes, and may shape global AI development trends.

🤔 **The European Commission has introduced an AI Code of Conduct to regulate how AI companies operate, aimed in particular at AI models posing "systemic risk."** The Code covers AI models trained with more than 10^25 FLOPs of compute, such as GPT-4, and requires companies to file a notification two weeks before training begins and to submit a Safety and Security Framework (SSF) and a Safety and Security Report (SSR).

🔍 **High-risk AI models must undergo external testing.** The Commission requires independent experts and the EU AI Office to review high-risk AI models. For companies such as OpenAI and Google, this means opening their models to outside evaluation, a sharp break from the self-regulation model of the past.

🛡️ **Copyright protection is being stepped up.** The Commission emphasizes copyright protection, requiring AI companies to comply with the robots.txt protocol, avoid training models on copyrighted content, and steer clear of data from piracy websites.

📅 **AI companies need to plan their response in advance.** With the EU's AI Code of Conduct about to take effect, companies must plan ahead: build external audits into their development processes, and establish sound copyright-compliance systems and documentation frameworks that meet EU requirements.

🌍 **EU AI regulation may shape AI development worldwide.** The EU's Code of Conduct could become a precedent for global AI regulation; other regions may follow suit, and the AI industry is heading into its biggest regulatory shift yet.

The European Commission recently introduced a Code of Conduct that could change how AI companies operate. It is not just another set of guidelines but rather a complete overhaul of AI oversight that even the biggest players cannot ignore. 

What makes this different? For the first time, we are seeing concrete rules that could force companies like OpenAI and Google to open their models for external testing, a fundamental shift in how AI systems could be developed and deployed in Europe.

The New Power Players in AI Oversight

The European Commission has created a framework that specifically targets what they are calling AI systems with “systemic risk.” We are talking about models trained with more than 10^25 FLOPs of computational power – a threshold that GPT-4 has already blown past.
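For a rough sense of scale, a widely used rule of thumb puts dense-transformer training cost at about 6 FLOPs per parameter per training token. The sketch below applies that approximation to a made-up model; the draft Code does not itself specify how compute should be counted, so treat this purely as a back-of-the-envelope illustration.

```python
# Back-of-the-envelope check against the EU's 10^25 FLOPs "systemic risk"
# threshold, using the common ~6 * params * tokens approximation for the
# training compute of a dense transformer. All numbers are illustrative.

THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training cost: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 500B parameters trained on 10T tokens.
flops = estimate_training_flops(500e9, 10e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Covered by the Code" if flops > THRESHOLD_FLOPS else "Below the threshold")
```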

Companies will need to report their AI training plans two weeks before they even start. 

At the center of this new system are two key documents: the Safety and Security Framework (SSF) and the Safety and Security Report (SSR). The SSF is a comprehensive roadmap for managing AI risks, covering everything from initial risk identification to ongoing security measures. Meanwhile, the SSR serves as a detailed documentation tool for each individual model.
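The draft does not prescribe any machine-readable format for these documents. Purely as a hypothetical illustration of the kind of per-model record-keeping the SSR implies, a provider might track something like the following internally; every field name here is an assumption, not taken from the Code.

```python
# Hypothetical internal record mirroring the kind of per-model details an
# SSR would document. Field names are illustrative assumptions only; the
# draft Code does not define a schema.
from dataclasses import dataclass, field

@dataclass
class ModelSafetyRecord:
    model_name: str
    training_flops: float                  # relevant to the 10^25 threshold
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    external_test_findings: list[str] = field(default_factory=list)

record = ModelSafetyRecord(
    model_name="example-frontier-model",
    training_flops=3e25,
    identified_risks=["misuse of advanced capabilities"],
    mitigations=["staged rollout", "access controls"],
)
print(record)
```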

External Testing for High-Risk AI Models

The Commission is demanding external testing for high-risk AI models. This is not your standard internal quality check – independent experts and the EU's AI Office are getting under the hood of these systems.

The implications are big. If you are OpenAI or Google, you suddenly need to let outside experts examine your systems. The draft explicitly states that companies must “ensure sufficient independent expert testing before deployment.” That's a huge shift from the current self-regulation approach.

The question arises: Who is qualified to test these incredibly complex systems? The EU's AI Office is stepping into territory that's never been charted before. They will need experts who can understand and evaluate new AI technology while maintaining strict confidentiality about what they discover.

This external testing requirement could become mandatory across the EU through a Commission implementing act. Companies can try to demonstrate compliance through “adequate alternative means,” but nobody's quite sure what that means in practice.

Copyright Protection Gets Serious

The EU is also getting serious about copyright. They are forcing AI providers to create clear policies about how they handle intellectual property.

The Commission is backing the robots.txt standard – a simple file that tells web crawlers where they can and cannot go. If a website says “no” through robots.txt, AI companies cannot simply ignore it and train on that content anyway, and search engines cannot penalize sites for using these exclusions. It's a power move that puts content creators back in the driver's seat.
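As a minimal sketch of what honoring robots.txt looks like in practice, a data-gathering pipeline can check a site's rules before fetching anything. This uses Python's standard-library parser; the crawler name "ExampleAIBot" and the URL are placeholders.

```python
# Check a site's robots.txt before fetching a page for a training corpus,
# using Python's standard-library robots.txt parser.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def allowed_to_fetch(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this user agent."""
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

if allowed_to_fetch("https://example.com/articles/some-page"):
    print("robots.txt allows this fetch; proceed")
else:
    print("robots.txt disallows this fetch; skip the URL")
```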

AI companies are also going to have to actively avoid piracy websites when they're gathering training data. The EU is even pointing them to its “Counterfeit and Piracy Watch List” as a starting point.
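One straightforward way to enforce that is a host-level blocklist applied before any URL enters the training-data pipeline. The sketch below is illustrative only; the domains in it are placeholders, not entries from the actual watch list.

```python
# Illustrative filter that drops URLs whose hostname appears on a piracy
# blocklist before they reach a training-data pipeline. The domains are
# placeholders, not real entries from the EU's watch list.
from urllib.parse import urlparse

PIRACY_BLOCKLIST = {"pirated-books.example", "stream-rips.example"}

def filter_sources(urls: list[str]) -> list[str]:
    """Keep only URLs whose hostname is not on the blocklist."""
    return [u for u in urls if urlparse(u).hostname not in PIRACY_BLOCKLIST]

candidates = [
    "https://pirated-books.example/dump/novel.txt",  # excluded
    "https://legitimate-news.example/story",         # kept
]
print(filter_sources(candidates))
```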

What This Means for the Future

The EU is creating an entirely new playing field for AI development. These requirements are going to affect everything from how companies plan their AI projects to how they gather their training data.

Every major AI company is now facing a choice: integrate these requirements into their development processes, or look for workarounds.

The timeline here matters too. This is not some far-off future regulation – the Commission is moving fast. They have already gathered around 1,000 stakeholders, divided into four working groups, to hammer out the details of how this is going to work.

For companies building AI systems, the days of “move fast and figure out the rules later” could be coming to an end. They will need to start thinking about these requirements now, not when they become mandatory. That means:

- building external audits into their development processes
- setting up robust copyright-compliance systems
- creating documentation frameworks that meet the EU's requirements

The real impact of these regulations will unfold over the coming months. While some companies may seek workarounds, others will integrate these requirements into their development processes. The EU's framework could influence how AI development happens globally, especially if other regions follow with similar oversight measures. As these rules move from draft to implementation, the AI industry faces its biggest regulatory shift yet.
