The Verge - Artificial Intelligence · July 12, 2024
The compliance countdown has started for AI companies operating in the EU

The EU AI Act was formally passed in March and officially takes effect on August 1st. The law is designed to regulate how technology companies apply artificial intelligence within the EU, including banning certain uses of AI tools and imposing transparency requirements on developers. It will be implemented in phases, with clear compliance deadlines for tech companies.

😄 **"Unacceptable risk" AI applications are banned:** The law explicitly prohibits AI applications "that threaten citizens' rights," such as biometric categorization used to infer sexual orientation or religious belief, and the scraping of faces from the internet or security camera footage. Emotion-recognition systems are banned in workplaces and schools, as are social scoring systems, and predictive policing tools are banned in some cases. These uses are considered to carry an "unacceptable risk," and tech companies must comply by February 2nd, 2025.

😊 **Codes of practice for AI applications:** Under the law, developers must have codes of practice in place by May 2nd, 2025, covering legal compliance standards, benchmarks, key performance indicators, specific transparency requirements, and more.

😉 **General-purpose AI systems must comply with copyright law:** From August 2025, general-purpose AI systems (such as chatbots) must comply with copyright law and meet transparency requirements, such as sharing summaries of the data used to train them.

😎 **High-risk AI systems require risk assessment and human oversight:** From August 2026, the law's rules will apply generally to companies operating in the EU. Developers of high-risk AI systems (for example, those embedded in infrastructure, employment, essential services such as banking and healthcare, and the justice system) will have 36 months (until August 2027) to comply with requirements such as risk assessment and human oversight.

🥳 **Hefty fines for violations:** Non-compliance will result in fines for the offending company, set as a percentage of global annual revenue or a fixed amount. Violations involving banned systems carry the highest fine: €35 million (about $38 million) or 7 percent of global annual revenue.

Illustration by Cath Virginia / The Verge | Photos by Getty Images

The AI Act is a sweeping set of rules for technology companies operating in the EU, which bans certain uses of AI tools and puts transparency requirements on developers. The law officially passed in March after two years of back and forth and includes several phases for compliance that will happen in waves.

Now that the full text has been published, the clock has officially started on the compliance deadlines companies must meet. The AI Act will enter into force in 20 days, on August 1st, and future deadlines will be tied to that date.

The new law prohibits certain uses of AI, and those bans are part of the first deadline. The AI Act bans uses “that threaten citizens’ rights,” such as biometric categorization to deduce information like sexual orientation or religion, or the untargeted scraping of faces from the internet or security camera footage. Systems that try to read emotions are banned in the workplace and schools, as are social scoring systems. The use of predictive policing tools is also banned in some instances. These uses are considered to carry an “unacceptable risk,” and tech companies will have until February 2nd, 2025, to comply.

Nine months after the law kicks in, on May 2nd, 2025, developers will have codes of practice, a set of rules that outlines what legal compliance looks like: what benchmarks they need to hit; key performance indicators; specific transparency requirements; and more. Three months after that — so August 2025 — “general purpose AI systems” like chatbots must comply with copyright law and fulfill transparency requirements like sharing summaries of the data used to train the systems.

By August 2026, the rules of the AI Act will apply generally to companies operating in the EU. Developers of some “high risk” AI systems will have up to 36 months (until August 2027) to comply with rules around things like risk assessment and human oversight. This risk level includes applications integrated into infrastructure, employment, essential services like banking and healthcare, and the justice system.

Failure to comply will result in fines for the offending company, either a percentage of total revenue or a set amount. A violation of banned systems carries the highest fine: €35 million (about $38 million), or 7 percent of global annual revenue.
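To illustrate how that top penalty tier scales with company size, here is a minimal sketch. It assumes the applicable ceiling is whichever of the two figures is higher; that reading of the penalty structure, and the revenue figures used, are assumptions for illustration rather than details stated in the article.

```python
# Hypothetical illustration of the AI Act's top penalty tier for banned uses.
# Assumption: the cap is the higher of EUR 35 million and 7% of global annual
# revenue. The revenue figures below are made up for illustration only.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine(global_annual_revenue_eur: float) -> float:
    """Return the assumed maximum fine ceiling for a prohibited-use violation."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# A company with EUR 100M in revenue would be capped by the fixed amount (EUR 35M);
# one with EUR 10B in revenue would be capped by the percentage (EUR 700M).
print(max_fine(100_000_000))     # 35000000.0
print(max_fine(10_000_000_000))  # 700000000.0
```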
