Introduction to French AI Policy

 


Published on July 4, 2024 3:39 AM GMT

This post was written as part of the AI Governance Fundamentals course by BlueDot. I thank Charles Beasley and the students from my cohort for their feedback and encouragements.

Disclaimer: The French policy landscape is in rapid flux, after president Macron called for a snap election on 1st and 7th July. The situation is still unfolding, and the state of French AI policy may be significantly altered.

At various AI governance events, I noticed that most people had a very unclear vision of what was happening in AI policy in France, why the French government seemed dismissive of potential AI risks, and what that would mean for the next AI Safety Summit in France.

The post below is my attempt at giving a quick intro to the key stakeholders of AI policy in France, their positions and how they influence international AI policy efforts.

My knowledge comes from hanging around AI safety circles in France for a year and a half, and working since January with the French Government on AI Governance. Therefore, I’m confident in the facts, but less in the interpretations, as I’m no policy expert myself.

Generative Artificial Intelligence Committee

The first major development in AI policy in France was the creation of a committee advising the government on Generative AI questions. This committee was created in September 2023 by former Prime Minister Elisabeth Borne.[1]

The goals of the committee were:

This committee was composed of notable academics and industry figures from the French AI field. Here is a list of its notable members:

Co-chairs:

Notable members:

See the full list of members in the announcement: Comité de l'intelligence artificielle générative.

“AI: Our Ambition for France”

In March 2024, the committee published a report highlighting 25 recommendations to the French government regarding AI. An official English version is available.

The report makes recommendations on how to make France competitive and a leader in AI, by investing in training, R&D and compute.

The report does not anticipate future developments: it treats the current capabilities of AI as a fixed point we need to work with, gives no thought to the future capabilities of AI models, and is overly dismissive of AI risks.

Some highlights from the report:

The AI Action Summit

In November 2023, the UK organized the inaugural AI Safety Summit. At the end of the Summit, France announced it would host the next one. The dates have recently been confirmed: 10-11 February 2025. The main organizer is Anne Bouverot, chair of the Generative Artificial Intelligence Committee mentioned above.

A major update is that the name was changed to “AI Action Summit”, and will now focus on five thematic areas, each led by an "Envoy to the Summit":

None of these organizers seem to think AI could pose a catastrophic risk in the coming years; some have even taken stances against concerns about catastrophic risks. This leads me to fear that the Summit might lose a large part of its AI Safety focus if efforts are not made to get safety back on the agenda.

Organizations working on AI policy and influencing it

Various companies, non-profits and governmental agencies influence the direction of AI policy in France. I listed only the most influential and most relevant organizations.

National AI Safety Institute

The French government has decided to create a National Center for AI Evaluation, which will be a joint organization under the public computer science research center INRIA, and the French standards lab LNE.[2]

This organization will represent France in the network of safety institutes, which was announced at the Korean AI Safety Summit.

Think-tanks

There are not many think-tanks influencing AI policy in France. The leading one is Institut Montaigne, one of the most influential French think-tanks, which has a division working on AI Governance.

The Future Society, a US- and Europe-based AI governance think-tank, also has some influence in France, but it is not their priority.

Leading AI companies in France

There are a lot of AI companies popping up in France. I listed below the companies that have, or could have, international influence, and which carry significant policy influence.

France is also home to large research centers of various international AI companies.

AI Safety and x-risk reduction focused orgs

France has a small AI Safety community (~20 people), so the only organization working on AI Safety with a strong focus on AI risk reduction is the Centre pour la Sécurité de l'IA (French center for AI safety), which works on raising awareness of AI risks among both the general public and policy circles, as well as on developing technical benchmarks for AI risks. It is an offshoot of EffiSciences, an organization dedicated to impactful research and reducing catastrophic risks.
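To give a concrete sense of what "technical benchmarks for AI risks" involves, here is a minimal, hypothetical sketch of a benchmark harness: score a model by the fraction of risky prompts it refuses. All names, prompts, and heuristics below are illustrative assumptions on my part, not the Centre pour la Sécurité de l'IA's actual code or datasets.

```python
# Hypothetical sketch of a risk-benchmark harness (not CSAI's actual work).

def refuses(response: str) -> bool:
    """Crude heuristic: does the response decline the request?"""
    markers = ("i cannot", "i can't", "i won't", "unable to help")
    return any(m in response.lower() for m in markers)

def evaluate(model, prompts: list[str]) -> float:
    """Fraction of risky prompts the model refuses (higher = safer)."""
    refusals = sum(refuses(model(prompt)) for prompt in prompts)
    return refusals / len(prompts)

if __name__ == "__main__":
    # Stand-in "model" that refuses everything, for demonstration only.
    dummy_model = lambda p: "I cannot help with that."
    risky_prompts = ["hypothetical risky prompt 1", "hypothetical risky prompt 2"]
    print(evaluate(dummy_model, risky_prompts))  # 1.0
```

Real benchmarks are of course far more involved (curated prompt sets, graded severity, model-based judges rather than keyword matching), but the core loop of prompting, scoring, and aggregating looks like this.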

Conclusion

As said in the intro, the political situation in France is in flux, and the key stakeholders of AI policy may change soon. If the far right party National Rally gets in power, their main AI advisor will probably be Laurent Alexandre, former doctor, transhumanist, and accelerationist. He will probably advocate for more investment, more acceleration, and less focus on safety. There may be changes in the organization of the Summit and its overall direction, but I expect most of the existing stakeholders to stay influential.

Overall, the position of the French government is influenced by actors skeptical of AI risks, who steer both national and international policy towards acceleration and innovation.

Given that such risk-skeptical actors also exist in other countries, my theory for why the French government ended up less focused on AI risks than the UK is the lack of prominent actors raising the alarm about the risks. I don’t think that the French Government is impervious to AI safety arguments; I just think that barely anybody has tried presenting the AI Safety side of the debate.



