MarkTechPost@AI | August 11, 2024
World’s First Major Artificial Intelligence AI Law Enters into Force in EU: Here’s What It Means for Tech Giants

The European AI Act entered into force on August 1, 2024, a major milestone in global AI regulation that aims to establish a unified regulatory framework, foster innovation, and reduce risk.

📄 Proposed by the European Commission in April 2021 and finalized in December 2023 after lengthy negotiations, the Act aims to establish a clear, uniform regulatory framework for AI within the EU, built on a forward-looking definition of AI and a risk-based approach to regulation.

🎯 The Act classifies AI systems by risk: low-risk systems such as spam filters and video games are considered safe and face no mandatory obligations; moderate-risk systems such as chatbots and AI-generated content must clearly inform users; high-risk systems such as medical AI tools must meet strict standards under ongoing human oversight; and some AI systems are banned outright.

🌍 The Act has a broad scope covering AI activities across industries and applies extraterritorially, so tech giants and developers worldwide must meet its compliance requirements. It also sets out detailed rules on key stakeholders, exemptions and special cases, the regulatory framework, and penalties for noncompliance.

💡 The Act has significant implications for tech giants and innovation: while it raises costs for IT companies, it can also foster innovation through supervised testing environments, emphasizes human rights and fundamental values, and is expected to deliver long-term benefits. Each EU member state must enforce the Act.

The European Artificial Intelligence Act came into force on August 1, 2024. As the world's first comprehensive AI law, it is a significant milestone in the global regulation of artificial intelligence and reflects the EU's ambition to establish itself as a leader in safe and trustworthy AI development.

The Genesis and Objectives of the AI Act

The Act was first proposed by the EU Commission in April 2021 amid growing concerns about the risks posed by AI systems. After extensive negotiations, marked by both agreement and disagreement, the EU Parliament and the Council finalized the text in December 2023.

The legislation was crafted with the primary goal of establishing a clear and uniform regulatory framework for AI within the EU, thereby fostering an environment conducive to innovation while mitigating the risks associated with AI technologies. The underlying philosophy of the Act is to adopt a forward-looking definition of AI and a risk-based approach to regulation.

Risk-Based Classification and Obligations

The European AI Act classifies AI systems based on the level of risk they pose:

    Low-Risk AI: These systems, such as spam filters and video games, are considered safe and carry no mandatory obligations. Developers can choose to follow voluntary transparency guidelines.
    Moderate-Risk AI: This category includes systems like chatbots and AI-generated content, which must clearly inform users that they are interacting with AI. Content such as deepfakes must be labeled as artificially generated.
    High-Risk AI: These include critical applications like medical AI tools and recruitment software. They must meet strict standards for accuracy, security, and data quality, with ongoing human oversight. There are also special supervised environments, called regulatory sandboxes, to help develop these technologies safely.
    Banned AI: Some AI systems are outright prohibited due to the unacceptable risks they pose, such as those used for government social scoring or AI toys that could encourage unsafe behavior in children. Certain biometric systems, like those for emotion recognition at work, are also banned unless narrowly exempted.
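For readers who think in code, the tiered structure above maps naturally onto a simple lookup table. The Python sketch below is purely illustrative: the tier names, example systems, and one-line obligation summaries paraphrase this article, and nothing in it is a definition from the Act itself.

```python
# Illustrative sketch only: paraphrases the article's four risk tiers as a
# lookup table. Tier names and obligation summaries are informal, not legal
# definitions from the AI Act.

RISK_TIERS = {
    "low": {
        "examples": ["spam filter", "video game"],
        "obligations": "none mandatory; voluntary transparency guidelines",
    },
    "moderate": {
        "examples": ["chatbot", "AI-generated content"],
        "obligations": "disclose AI interaction; label deepfakes",
    },
    "high": {
        "examples": ["medical AI tool", "recruitment software"],
        "obligations": "accuracy, security, data quality, human oversight",
    },
    "banned": {
        "examples": ["government social scoring", "unsafe AI toys"],
        "obligations": "prohibited outright, narrow exemptions aside",
    },
}

def obligations_for(tier: str) -> str:
    """Return the summarized obligations for a given risk tier."""
    return RISK_TIERS[tier]["obligations"]

print(obligations_for("high"))
# -> accuracy, security, data quality, human oversight
```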

Definition, Scope, and Applicability

Broad Scope and Horizontal Application

The Act is expansive in nature and applies horizontally to AI activities across various sectors. Its scope is designed to cover a wide range of AI systems, from high-risk models to general-purpose AI, ensuring that the deployment and further development of AI adheres to stringent standards and rules.

Extraterritorial Scope and Global Implications

One of the most significant and unique characteristics of the Act is its extraterritorial scope: the law applies not only to EU-based organizations but also to non-EU entities whose AI systems are used within the EU. In effect, tech giants and AI developers around the world must meet the Act's compliance requirements if their services and products are to be offered to EU users.

Key Stakeholders: Providers and Deployers 

In the framework of the AI Act, “providers” are the ones who create AI systems, while “deployers” are those who implement these systems in real-world scenarios. Although their roles differ, deployers can sometimes become providers, especially if they make substantial changes to an AI system. This interaction between providers and deployers underscores the importance of having clear rules and solid compliance strategies.
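The role-shifting rule described above (a deployer who substantially modifies an AI system can take on a provider's obligations) can be pictured as a simple conditional. The sketch below is a toy illustration; the function name and boolean flags are assumptions made for exposition, not legal tests defined in the Act.

```python
# Toy sketch of the provider/deployer distinction described above.
# "substantially_modified" is an illustrative flag, not a legal criterion.

def effective_role(builds_system: bool, substantially_modified: bool) -> str:
    """Return which set of AI Act obligations applies to an actor."""
    if builds_system or substantially_modified:
        return "provider"  # creates, or has substantially reshaped, the system
    return "deployer"      # merely operates someone else's system

# A deployer who substantially changes a system is treated as a provider.
print(effective_role(builds_system=False, substantially_modified=True))
# -> provider
```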

Exemptions and Special Cases 

The AI Act does allow for certain exceptions, such as AI systems used for military, defense, and national security, or those developed strictly for scientific research. Additionally, AI employed for personal, non-commercial use is exempt, as are open-source AI systems unless they fall under high-risk or transparency-required categories. These exemptions ensure the Act focuses on regulating AI with significant societal impact while allowing room for innovation in less critical areas.

Regulatory Landscape: Multiple Authorities and Coordination

The AI Act is enforced through a multi-layered regulatory framework that includes national authorities in each EU member state as well as the European AI Office and the AI Board at the EU level. This structure is designed to guarantee that the AI Act is applied consistently across the EU, with the AI Office playing a key role in coordinating enforcement and providing guidance.

Significant Penalties for Noncompliance

The AI Act imposes significant penalties for noncompliance, including fines of up to 7% of worldwide annual revenue or €35 million, whichever is higher, for engaging in prohibited AI practices. Other violations, such as failing to meet the requirements for high-risk AI systems, carry lower fines. These steep penalties underscore the EU's commitment to enforcing the AI Act and deterring unethical AI practices.
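For a back-of-the-envelope sense of the "whichever is higher" cap described above, consider the following sketch. It uses only the figures quoted in this article (7% of worldwide annual revenue or €35 million) and is an illustration, not legal guidance.

```python
# Sketch of the fine cap for prohibited-practice violations described above:
# up to 7% of worldwide annual revenue or EUR 35 million, whichever is higher.
# Figures are taken from this article; this is not legal advice.

def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices."""
    return max(0.07 * worldwide_annual_revenue_eur, 35_000_000)

# A company with EUR 1 billion in revenue: 7% (EUR 70M) exceeds EUR 35M.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # -> 70,000,000

# A smaller firm with EUR 100M in revenue: the EUR 35M floor applies instead.
print(f"{max_fine_eur(100_000_000):,.0f}")    # -> 35,000,000
```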

Prohibited AI Practices: Protecting EU Values 

The AI Act expressly prohibits certain AI practices that are harmful, exploitative, or contrary to EU values. These include AI systems that employ subliminal or manipulative techniques, exploit vulnerabilities, or perform social scoring. The Act also restricts AI use in areas such as predictive policing and emotion recognition, notably in workplaces and educational settings. These prohibitions demonstrate the EU's commitment to protecting fundamental rights and ensuring AI development follows ethical norms.

Responsibilities of High-Risk AI System Deployers

Deployers of high-risk AI systems must comply with strict obligations, such as following the provider's instructions, ensuring human oversight, and performing regular monitoring and reviews. They must also maintain records and cooperate with regulatory authorities. Additionally, deployers must conduct data protection and fundamental rights impact assessments where required, underscoring the importance of accountability in AI deployment.

Governance and Enforcement: The Role of the European AI Office and AI Board 

The European AI Office, part of the European Commission, is responsible for enforcing the rules governing general-purpose AI models and for ensuring that the AI Act is applied consistently across member states. The AI Board, comprising representatives from each member state, will help guarantee consistent implementation and provide guidance. Together, these bodies will work to ensure regulatory uniformity and to address emerging challenges in AI governance.

General-Purpose AI Models: Special Considerations 

General-purpose AI (GPAI) models, which can handle various tasks, must meet specific requirements under the AI Act. Providers of these models need to publish detailed summaries of the data used for training, keep technical documentation, and comply with EU copyright laws. Models that pose systemic risks have additional obligations, such as notifying the European Commission, conducting adversarial testing, and ensuring cybersecurity.

Implications for Tech Giants and Innovation

The AI Act is a significant development for technology businesses operating in the European Union. Under the new legislation, organizations that design and deploy AI, particularly high-risk systems, must meet stringent requirements for transparency, data quality, and human oversight. These rules will almost certainly raise compliance costs for IT companies, and the prospect of fines of up to 7% of worldwide annual turnover for violations, particularly those involving prohibited AI applications, shows how seriously the EU takes enforcement.

Despite these obstacles, the AI Act has the potential to boost innovation. By establishing explicit criteria, the Act levels the playing field for all EU AI developers, fostering competitiveness and the development of dependable AI technology.

The creation of controlled testing environments, also known as regulatory sandboxes, is specifically intended to assist enterprises in securely developing high-risk AI systems by allowing them to explore and enhance their AI products under supervision.

Furthermore, by emphasizing human rights and fundamental values, the EU is positioning itself as a pioneer in ethical AI development. The objective is to increase public trust in AI, which is critical for its widespread adoption and integration into daily life. This approach is expected to yield considerable long-term benefits, including improved public services, healthcare, and manufacturing efficiency.

Enforcement and Next Steps

Responsibility for enforcing the AI Act lies with the national authorities in each EU country, with market surveillance beginning on August 2, 2025. The European Commission's AI Office will play an important role in implementing the AI Act, particularly for general-purpose AI models. The AI Office will be supported by three advisory groups: the European Artificial Intelligence Board, a panel of independent scientific experts, and an advisory forum comprised of diverse stakeholders.

Noncompliance with the AI Act will result in significant fines, which may vary based on the severity of the infraction. To prepare for the Act’s full implementation, the Commission has introduced the AI Pact, an initiative that encourages AI developers to start adopting crucial obligations before they become legally required. This interim measure aims to ease the transition before most of the Act’s provisions take effect on August 2, 2026.

Conclusion

The European Artificial Intelligence Act represents a landmark in the global regulation of AI, setting a precedent for how governments can balance the promotion of innovation with the protection of fundamental rights. For tech giants operating within the EU, the AI Act introduces both challenges and opportunities, requiring them to navigate a complex regulatory landscape while continuing to innovate.

