Unite.AI
Striking the Balance: Global Approaches to Mitigating AI-Related Risks

This article examines how countries and regions around the world differ in their approaches to regulating artificial intelligence (AI). The United States favours innovation first, relying on market solutions and voluntary guidelines, but its regulatory framework is fragmented. The EU takes a precautionary approach, imposing strict oversight on high-risk AI systems through its Artificial Intelligence Act and setting compliance requirements for AI providers in the EU market. The UK has adopted a “lightweight” framework that sits between the two, centred on safety, fairness and transparency. The article stresses the importance of international cooperation in setting baseline AI standards, addressing risks and fostering innovation, and highlights the key role of international organisations such as the OECD in advancing global AI governance.

🚀 US regulation favours innovation, relying on market-led solutions and voluntary guidelines. The United States leans towards encouraging AI innovation, depending on market forces and voluntary compliance with guidelines. Even so, it has passed significant legislation such as the National AI Initiative Act and issued an executive order on safe, secure and trustworthy AI. This approach has been criticised, however, for its fragmented rules, lack of enforceable standards and gaps in privacy protection.

🛡️ The EU takes a precautionary stance with strict AI regulation. Through the Artificial Intelligence Act, the EU regulates AI comprehensively using a risk-based approach, imposing strict rules on high-risk AI systems such as those used in healthcare and critical infrastructure. Low-risk applications face lighter oversight, while certain applications, such as government-run social scoring systems, are banned outright. The rules bind any provider operating in the EU or offering AI solutions to its market.

🇬🇧 The UK pursues a balanced, “lightweight” framework. The UK’s approach to AI regulation sits between those of the US and the EU, centred on core values such as safety, fairness and transparency. The government has published an AI Opportunities Action Plan and founded the AI Safety Institute (AISI) to evaluate the safety of advanced AI models. The approach nonetheless faces criticism, including limited enforcement capabilities and a lack of coordination between sectoral legislation.

It’s no secret that over the last few years, modern technologies have been pushing ethical boundaries under existing legal frameworks that weren’t made to fit them, creating legal and regulatory minefields. To combat these effects, regulators in different countries and regions are choosing to proceed in different ways, increasing global tensions when agreement can’t be found.

These regulatory differences were highlighted at the recent AI Action Summit in Paris. The event’s final statement focused on inclusivity and openness in AI development, mentioning safety and trustworthiness only in broad terms and without emphasising specific AI-related risks, such as security threats. Although the statement was drafted by 60 nations, the UK and US were conspicuously absent from its signatories, which shows how little consensus there is right now across key countries.

Tackling AI risks globally

AI development and deployment is regulated differently in each country. Nonetheless, most approaches fit somewhere between the two extremes: those of the United States and of the European Union (EU).

The US way: first innovate, then regulate

The United States has no federal-level acts regulating AI specifically; instead, it relies on market-based solutions and voluntary guidelines. However, there are some key pieces of AI-related legislation, including the National AI Initiative Act, which aims to coordinate federal AI research; the Federal Aviation Administration Reauthorisation Act; and the National Institute of Standards and Technology’s (NIST) voluntary risk management framework.

The US regulatory landscape remains fluid and subject to big political shifts. For example, in October 2023, President Biden issued an Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, putting in place standards for critical infrastructure, enhancing AI-driven cybersecurity and regulating federally funded AI projects. However, in January 2025, President Trump revoked this executive order, in a pivot away from regulation and towards prioritising innovation.

The US approach has its critics. They note that its “fragmented nature” leads to a complex web of rules that “lack enforceable standards” and leave “gaps in privacy protection.” The stance as a whole is in flux, however: in 2024, state legislators introduced almost 700 pieces of new AI legislation, and there have been multiple hearings on AI in governance as well as on AI and intellectual property. Although it’s apparent that the US government doesn’t shy away from regulation, it’s clearly looking for ways to implement it without compromising innovation.

The EU way: prioritising prevention

The EU has chosen a different approach. In August 2024, the European Parliament and Council introduced the Artificial Intelligence Act (AI Act), widely considered the most comprehensive piece of AI regulation to date. Employing a risk-based approach, the act imposes strict rules on high-sensitivity AI systems, e.g., those used in healthcare and critical infrastructure. Low-risk applications face only minimal oversight, while certain applications, such as government-run social scoring systems, are banned outright.

In the EU, compliance is mandatory not only for organisations within its borders but also for any provider, distributor or user of AI systems operating in the EU or offering AI solutions to its market, even if the system was developed elsewhere. This is likely to pose challenges for US and other non-EU providers of integrated products as they work to adapt.

Criticisms of the EU’s approach include its alleged failure to set a gold standard for human rights, as well as excessive complexity and a lack of clarity. Critics are also concerned that the EU’s highly exacting technical requirements come at a time when it is seeking to bolster its competitiveness.

Finding the regulatory middle ground

Meanwhile, the United Kingdom has adopted a “lightweight” framework that sits somewhere between the EU and the US, and is based on core values such as safety, fairness and transparency. Existing regulators, like the Information Commissioner’s Office, hold the power to implement these principles within their respective domains.

The UK government has published an AI Opportunities Action Plan, outlining measures to invest in AI foundations, drive cross-economy adoption of AI and foster “homegrown” AI systems. In November 2023, the UK founded the AI Safety Institute (AISI), which evolved from the Frontier AI Taskforce. AISI was created to evaluate the safety of advanced AI models, collaborating with major developers on safety testing.

However, criticisms of the UK’s approach to AI regulation include limited enforcement capabilities and a lack of coordination between sectoral legislation. Critics have also noted the absence of a central regulatory authority.

Like the UK, other major countries have also found their own place on the US-EU spectrum. For example, Canada has introduced a risk-based approach with the proposed AI and Data Act (AIDA), which is designed to strike a balance between innovation, safety and ethical considerations. Japan has adopted a “human-centric” approach to AI, publishing guidelines that promote trustworthy development. Meanwhile, in China, AI regulation is tightly controlled by the state, with recent laws requiring that generative AI models undergo security assessments and align with socialist values. Like the UK, Australia has released an AI ethics framework and is looking into updating its privacy laws to address emerging challenges posed by AI innovation.

How to establish international cooperation?

As AI technology continues to evolve, the differences between regulatory approaches are becoming increasingly apparent. Each individual approach to data privacy, copyright protection and other aspects makes a coherent global consensus on key AI-related risks more difficult to reach. In these circumstances, international cooperation is crucial to establish baseline standards that address key risks without curtailing innovation.

The answer could lie with global organisations like the Organisation for Economic Cooperation and Development (OECD), the United Nations and several others, which are currently working to establish international standards and ethical guidelines for AI. The path forward won’t be easy, as it requires everyone in the industry to find common ground. Given that innovation is moving at light speed, the time to discuss and agree is now.

