Why Engaging with Global Majority AI Policy Matters

Published on July 2, 2025 1:46 AM GMT

Over the past 6-8 months, I have been involved in drafting AI policy recommendations and official statements directed at governments and institutions across the Global Majority: Chile, Lesotho, Malaysia, the African Commission on Human and Peoples' Rights (ACHPR), Israel, and others. At first glance, this may appear to be a less impactful use of time compared to influencing more powerful jurisdictions like the United States or the European Union. But I argue that engaging with the Global Majority is essential, neglected, and potentially pivotal in shaping a globally safe AI future. Below, I outline four core reasons.

1. National-Level Safeguards Are Essential in a Fracturing World

As global alignment becomes harder, we need decentralized, national-level safety nets.

In such a world, country-level laws and guidance documents serve as a final line of retreat. Even modest improvements in national frameworks can meaningfully reduce the risk of AI misuse, particularly in high-leverage areas like biometric surveillance, automated welfare allocation, or predictive policing.

Moreover, in many Global Majority countries, the state remains the most powerful actor. When risks emerge, it is not always corporations but often ministries, police departments, or public-sector procurement decisions that determine outcomes. Consider the history of state-led atrocities enabled by surveillance or classification systems: Rwanda's bureaucratic classification systems, which were used to identify targets during the 1994 genocide, and Apartheid-era South Africa, which collected data to enforce racial segregation. Engaging with governments, building governance capacity, and installing public-sector-specific guardrails are therefore critical.

2. The Space Is Underserved and Entry Barriers Are Lower Than You Think

Engagement with Global Majority AI policy is still deeply neglected.

This creates significant leverage for meaningful influence. A single well-argued submission can shape a ministry’s perception of risks or clarify foundational governance issues. One doesn’t need millions in funding or a permanent office in the capital. In many cases, simple engagement and public comment can go a long way.

3. It Builds Toward a Pluralistic International Regime

Championing safety-focused norms in the Global Majority may help lay the groundwork for a more robust and inclusive international framework for AI governance. Many countries in the Global South, especially middle powers (e.g. South Africa, Brazil, Indonesia), have historically played important convening roles in other domains (e.g. the Non-Aligned Movement, BRICS, the Cartagena Protocol).

In future AI governance scenarios, these countries could serve as trusted mediators between Global North and Global South perspectives.

4. Risks Must Be Contextualized to Local Settings

AI risks are not monolithic, and AI governance must cover the spectrum of risks. The challenges posed by AI systems in Lagos or Kuala Lumpur differ significantly from those in London or Brussels, shaped by factors such as data scarcity and bias, different power structures, and varying institutional capacity.

Global frameworks like the EU AI Act or the OECD AI Principles often assume certain levels of institutional maturity or civil liberties protections. These assumptions can fall short.

Consider the case of autonomous weapon systems (AWS): over 80% of global conflicts in the past decade have occurred in Global South regions, and more than 90% of the countries ACLED lists at extreme, high, or turbulent levels of conflict are in the Global Majority. While the development of AWS is typically concentrated in technologically advanced countries, deployment is more likely to happen in Global Majority countries. These environments often serve as the testing ground for cutting-edge military technologies without meaningful global scrutiny. Western policy frameworks rarely prioritize this asymmetric risk, in part because the worst consequences will not be felt domestically.

