Published on July 2, 2025 1:46 AM GMT
Over the past 6-8 months, I have been involved in drafting AI policy recommendations and official statements directed at governments and institutions across the Global Majority: Chile, Lesotho, Malaysia, the African Commission on Human and Peoples' Rights (ACHPR), Israel, and others. At first glance, this may appear to be a less impactful use of time compared to influencing more powerful jurisdictions like the United States or the European Union. But I argue that engaging with the Global Majority is essential, neglected, and potentially pivotal in shaping a globally safe AI future. Below, I outline four core reasons.
1. National-Level Safeguards Are Essential in a Fracturing World
As global alignment becomes harder, we need decentralized, national-level safety nets. Some things to keep in mind:
- What if the EU AI Act is watered down tomorrow due to lobbying?
- The U.S. Biden Executive Order on AI has already been rolled back.
In such a world, country-level laws and guidance documents serve as a final line of retreat. Even modest improvements in national frameworks can meaningfully reduce the risk of AI misuse, particularly in high-leverage areas like biometric surveillance, automated welfare allocation, or predictive policing.
Moreover, in many Global Majority countries, the state remains the most powerful actor. When risks emerge, it is not always corporations but often ministries, police departments, or public-sector procurement decisions that determine outcomes. Consider the history of state-led atrocities enabled by surveillance or classification systems: Rwanda's bureaucratic classification systems, which were used to identify targets during the 1994 genocide, and Apartheid-era South Africa, which collected population data to enforce racial segregation. Engaging with governments, building governance capacity, and establishing public-sector-specific guardrails are therefore critical.
2. The Space Is Underserved and Entry Barriers Are Lower Than You Think
Engagement with Global Majority AI policy is still deeply neglected:
- In some consultations we contributed to, there were fewer than 10 public submissions.
- Governments often copy-paste or lightly adapt international frameworks like the UNESCO Recommendation on the Ethics of AI or the Council of Europe AI Treaty.
- Misunderstandings are frequent. For example, one policy document wrongly assumed the CoE Treaty was only open to European countries.
This creates significant leverage for meaningful influence. A single well-argued submission can shape a ministry’s perception of risks or clarify foundational governance issues. One doesn’t need millions in funding or a permanent office in the capital. In many cases, simple engagement and public comment can go a long way.
3. It Builds Toward a Pluralistic International Regime
Championing safety-focused norms in the Global Majority may also help lay the groundwork for a more robust and inclusive international framework for AI governance. Many countries in the Global South, especially middle powers (e.g. South Africa, Brazil, Indonesia), have historically played important convening roles in other domains (e.g. the Non-Aligned Movement, BRICS, the Cartagena Protocol).
In future AI governance scenarios, these countries could serve as trusted mediators between Global North and Global South perspectives.
4. Risks Must Be Contextualized to Local Settings
AI risks are not monolithic, and AI governance must cover the full spectrum of them. The challenges posed by AI systems in Lagos or Kuala Lumpur differ significantly from those in London or Brussels. Relevant factors include:
- Data scarcity or bias (e.g. models trained primarily on Western data misfiring in other linguistic or social contexts);
- Different power structures (where the risk of AI misuse stems more from state actors than from large corporations);
- Varied institutional capacity (e.g. lack of independent auditing bodies, digital literacy gaps).
Global frameworks like the EU AI Act or the OECD AI Principles often assume levels of institutional maturity or civil-liberties protection that do not hold everywhere. Where those assumptions fail, the frameworks fall short.
Consider the case of autonomous weapon systems (AWS): over 80% of global conflicts over the past decade have occurred in Global South regions, and more than 90% of the countries ACLED lists at extreme, high, or turbulent levels of conflict are in the Global Majority. While the development of AWS is typically concentrated in technologically advanced countries, their deployment is more likely to happen in Global Majority countries. These environments often serve as the testing ground for cutting-edge military technologies without meaningful global scrutiny. Western policy frameworks rarely prioritize this asymmetric risk, in part because the worst consequences will not be felt domestically.