TechCrunch News · February 2
AI systems with ‘unacceptable risk’ are now banned in the EU

The first compliance deadline of the EU AI Act took effect on February 2, marking a major step in the EU's regulation of artificial intelligence. The act bans AI systems deemed to pose "unacceptable risk" or harm, and regulates AI applications according to risk level: high-risk applications face strict oversight, while unacceptable-risk applications are prohibited outright. The act covers a wide range of AI use cases, from consumer applications to physical environments, and imposes heavy fines on companies that violate its provisions. Although some companies have already signed a voluntary pledge, compliance challenges remain, particularly around how the act interacts with other legal frameworks. Further guidance is expected in early 2025 to clarify compliance requirements.

🚫 AI applications banned under the EU AI Act include: social scoring, subliminal manipulation of decisions, exploiting vulnerable groups, predicting crime based on appearance, inferring sexual orientation, real-time collection of biometric data in public places, inferring emotions at work or school, and building facial recognition databases by scraping the web or surveillance footage.

⚖️ The act classifies AI applications by risk level: minimal-risk applications are unregulated, limited-risk applications face light-touch oversight, high-risk applications face strict oversight, and unacceptable-risk applications are banned. For example, AI for healthcare recommendations is high risk, while a customer service chatbot is limited risk.

💰 Companies that violate the AI Act face fines of up to €35 million or 7% of annual revenue, whichever is greater. Although February 2 is the formal compliance deadline, fines will only take effect later, with the timing depending on the designation of the competent authorities.

🤝 The EU AI Pact is a voluntary pledge encouraging companies to apply the AI Act's principles ahead of schedule. More than 100 companies, including Amazon, Google, and OpenAI, have signed it, committing to identify their high-risk AI systems. Tech giants such as Meta and Apple, however, did not sign.

⚠️ Implementation of the AI Act still faces challenges, such as its interaction with other legal frameworks (e.g., GDPR) and the need for clear compliance guidelines, standards, and codes of conduct. Further guidance is expected in early 2025 to address these issues.

As of Sunday in the European Union, the bloc’s regulators can ban the use of AI systems they deem to pose “unacceptable risk” or harm.

February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament finally approved last March after years of development. The act officially went into force on August 1; what's now following is the first of the compliance deadlines.

The specifics are set out in Article 5, but broadly, the Act is designed to cover a myriad of use cases where AI might appear and interact with individuals, from consumer applications through to physical environments.

Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.

Some of the unacceptable activities include:

  • AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
  • AI that manipulates a person’s decisions subliminally or deceptively.
  • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
  • AI that attempts to predict people committing crimes based on their appearance.
  • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
  • AI that collects “real time” biometric data in public places for the purposes of law enforcement.
  • AI that tries to infer people’s emotions at work or school.
  • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.

The fines won’t kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.

“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Sumroy said. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”

The February 2 deadline is in some ways a formality.

Last September, over 100 companies signed the EU AI Pact, a voluntary pledge to start applying the principles of the AI Act ahead of its entry into application. As part of the Pact, signatories — which included Amazon, Google, and OpenAI — committed to identifying AI systems likely to be categorized as high risk under the AI Act.

Some tech giants, notably Meta and Apple, skipped the Pact. French AI startup Mistral, one of the AI Act’s harshest critics, also opted not to sign.

That isn’t to suggest that Apple, Meta, Mistral, or others who didn’t agree to the Pact won’t meet their obligations — including the ban on unacceptably risky systems. Sumroy points out that, given the nature of the prohibited use cases laid out, most companies won’t be engaging in those practices anyway.

“For organizations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time — and crucially, whether they will provide organizations with clarity on compliance,” Sumroy said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”

There are exceptions to several of the AI Act’s prohibitions.

For example, the Act permits law enforcement to use certain systems that collect biometrics in public places if those systems help perform a “targeted search” for, say, an abduction victim, or to help prevent a “specific, substantial, and imminent” threat to life. This exemption requires authorization from the appropriate governing body, and the Act stresses that law enforcement can’t make a decision that “produces an adverse legal effect” on a person solely based on these systems’ outputs.

The Act also carves out exceptions for systems that infer emotions in workplaces and schools where there’s a “medical or safety” justification, like systems designed for therapeutic use.

The European Commission, the executive branch of the EU, said that it would release additional guidelines in “early 2025,” following a consultation with stakeholders in November. However, those guidelines have yet to be published.

Sumroy said it’s also unclear how other laws on the books might interact with the AI Act’s prohibitions and related provisions. Clarity may not arrive until later in the year, as the enforcement window approaches.

“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges — particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”

