Communications of the ACM - Artificial Intelligence (April 9)
AI Policies Redux

Several years ago, I compiled the first reference book on AI policies.[1] I aimed to provide a ready reference for the emerging field of artificial intelligence, similar to the books I had published on privacy law.[2] At that time, we noted the rapid explosion of AI ethics frameworks, but it was still early days for AI governance. The OECD had just finalized the first AI principles endorsed by national governments. The Universal Guidelines for AI, published the year before, were gaining influence among AI policymakers. But work on the EU AI Act had not yet started. At the Council of Europe, the first steps were taken toward a global AI treaty.

This year, I returned to the project with an updated AI Policy Sourcebook.[3] As I wrote in the introduction to the new volume, a lot has happened in five years. Governments have raced to develop national AI strategies, amend current laws, and enact new laws. Remarkable progress has been made by international organizations. The early, now venerable, OECD AI Principles were recently updated to account for recent developments in AI technology. The EU AI Act was adopted in 2024, and implementation has begun. Forty countries have endorsed the Council of Europe Framework Convention on AI. At the United Nations, there is consensus on the need to ensure safe, secure, and trustworthy AI. But challenges remain in many domains: sustainability, autonomous weapons, algorithmic transparency, labor impacts, copyright, and more.

I provide brief commentary on the various AI governance frameworks collected in the Sourcebook. Then I share a few insights, having both followed the development of AI governance frameworks over the last several years and helped to develop and draft several.

First, the level of engagement among countries worldwide in AI policy is striking and encouraging. A topic that was esoteric less than a decade ago is now front and center for many governments. However, the public remains largely unaware of these initiatives, particularly in the United States. Reporting on AI tends to swing between the hype of overstated innovation and the doom of exaggerated concern. Thoughtful reporting on AI governance, focusing on progress and setbacks, would engage the public, strengthen democratic institutions, and promote the development of well-informed public policy.

Second, the development of AI policy should be viewed as an evolutionary process. From the OECD AI Principles to the EU AI Act, we can see the rapid development and sophistication in the understanding of AI policy. More issues are considered and in greater detail. New initiatives build on earlier initiatives. It is essential for those who joined the conversation after the release of ChatGPT in late 2022 to recognize the long history of efforts by governments to regulate AI. Even this collection provides only a recent timeline of a topic that goes back to the debate over autonomous weapons more than 40 years ago.

Third, AI policymakers must be careful to steer a steady course, neither veering too quickly in one direction nor the other. The introduction of generative AI in 2022 posed new challenges, but many of the common elements for effective governance were already well known: the need for impact assessments, the establishment of supervisory authorities, the allocation of rights and responsibilities for those who use AI systems and those who design, develop, and deploy AI systems. Coordinating new AI regulations with preexisting rules for automated decision-making is not a simple task. However, it is a mistake to exclude rule-based expert systems (symbolic AI), which have defined the field from the start, from a modern definition of AI systems. A good definition of AI should remain technology-neutral.

Fourth, we are now entering a new phase of AI governance. If the period 2019-2024 could be described as Establishing Norms for AI Governance, the period 2025-2029 is about the Implementation and Enforcement of AI Governance Norms. This is when the hard work begins. Governments must spend less time articulating AI principles and more time advancing the principles they have already endorsed. There is a real risk at this moment of moving sideways or even backward. With AI governance, we do not have the luxury of time. AI is evolving rapidly. Regulation must do so as well.

Fifth, it is not too soon to ask questions about regulatory convergence and divergence, progress and setbacks, which governance strategies are working and which are in need of repair. Public policy benefits from these comparative assessments. CAIDP’s comprehensive report, the AI and Democratic Values Index, provides the basis for this work.[4] With the AI and Democratic Values Index, we provide a narrative survey of AI policies and a methodology to assess AI policies and practices against democratic values. This methodology provides an opportunity to compare national AI policies at a moment in time and to analyze trends over time.

Finally, we need to underscore the urgency of AI governance. Companies and countries are rushing quickly into a future they do not fully understand. The leading AI experts caution that advanced AI systems are not reliable or trustworthy. They have urged us to pause, at least to implement the safeguards and guardrails necessary for sustainable progress. Others warn that many of the AI systems already deployed lack meaningful transparency and accountability. Bias is embedded and replicated at scale.

It is not too soon to think critically about the consequences of the AI age and how we are to maintain human control over this rapidly evolving technology. Human reason remains central to this task. With the publication of this reference book on early efforts to govern AI, we hope to advance that work. Sapere aude![5]


[1] Marc Rotenberg, ed., The AI Policy Sourcebook (2019).

[2] Marc Rotenberg, ed., The Privacy Law Sourcebook (1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2016, 2018, 2020).

[3] Marc Rotenberg and Eleni Kyriakides, eds., The AI Policy Sourcebook (2025).

[4] April Yoder, Marc Rotenberg, and Merve Hickok, eds., AI and Democratic Values Index (CAIDP 2025).

[5] Immanuel Kant, What is Enlightenment? (1784). See also Marc Rotenberg, Artificial Intelligence and Galileo’s Telescope, review of The Age of AI by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, Issues in Science and Technology (December 2021) (discussing Kant’s views on the role of human reason as applied to AI).

Marc Rotenberg is founder and executive director of the Center for AI and Digital Policy, a global network of AI policy experts and human rights advocates.
