The Intractability of AI Resilience in Developing Nations

The article argues that while some developed countries are strengthening their societies' capacity to adapt to harms from advanced AI systems, developing countries may lag behind. Many societal adaptation mechanisms depend on infrastructure that developing countries lack, and the use of powerful AI models in these regions could cause serious harm. Developed countries should therefore not treat their own societal resilience as a licence for hasty model release, but should consider the needs of vulnerable nations. The article examines gaps in developing countries' adaptive capacity, outlines potential risks and mitigation strategies, and stresses the importance of the Global South actively participating in AI development and sharing in its benefits.

⚠️ Developing countries lack the institutional capacity to handle advanced AI risks: many fall short in national reporting, monitoring of AI system use, cybersecurity, and defence against biological attacks, all of which are key elements of societal adaptation.

⚔️ Violent conflict and scarce technical knowledge compound the risks: in conflict zones AI may be misused for military ends, while low levels of technical knowledge make it hard for governments to oversee and respond to AI threats.

📉 Poverty and unstable environments make recovery harder: once technical systems are attacked, recovery becomes far more difficult in settings that lack technical expertise and resources, potentially leading to more severe consequences.

🔒 Developing countries face a risk of 'developmental lock-in': insecure AI systems could be used to sabotage emerging digital infrastructure, leaving these regions dependent on external technology and exposed to exploitation.

Published on January 21, 2025 3:31 PM GMT

TLDR: Most of the developing world lacks the institutional capacity to adapt to powerful, insecure AI systems by 2030. Incautious model release could disproportionately affect these regions. Enhanced societal resilience in frontier AI states is consequently no licence for incautious release.

‘We should develop more societal resilience to AI-related harms’ is now a common refrain in AI governance. The argument goes that we should try to reduce the risks of large frontier models as far as possible, but some amount of harm is inevitable, and technological developments will always see potentially harmful models fall outside the regulatory net. The countermeasure, born both of practicality and necessity, is to develop societal resilience through adaptation to avoid, defend against and remedy harms from advanced AI systems. 

This is a sensible idea, and one I think more people should work on.

However, even if societal resilience against the harms of advanced AI systems (from here on, ‘societal resilience’) in frontier AI states were to improve significantly, the rest of the world is unlikely to develop at the same speed. Many societal resilience mechanisms depend on foundations that regions in the developing world do not have. If powerful models were used in these regions, they might still be used to perpetrate significant harms.

So if you take away only two things from this blog, they are:

    Increased societal resilience in some areas does not 1) preclude the risks of AI models being misused in other areas, and therefore does not 2) provide a blank cheque for those models to be deployed. Regulation and deployment decisions based on the resilience of developed-world states, while ignoring the vulnerability of nations that cannot implement similar protections, could cause severe harm.

This short exploratory post aims to do three things:

    1. Examine some foundations of societal resilience and why they may be lacking in the developing world,
    2. Suggest some potential harms arising from policies assuming such resilience, and
    3. Explore some avenues to mitigate these risks.

This is still a new research area, however, so I’m very keen to hear any feedback you might have. 

1. ‘Societal Resilience’ requires strong foundations

Successful implementation also depends on the existence of appropriate institutions for resolving collective action problems, and on organisations’ technical, financial and institutional capacities for monitoring and responding to AI risk. 

-- From Bernardi et al., ‘Societal Adaptation to Advanced AI’

The requirements for effectively adapting society to advanced AI are demanding. At a quick glance, here are a few things that might be useful. We don't yet have enough of these in the developing world, but there has been a start:

    National reporting mechanisms for AI incidents
    The capacity to monitor how AI systems are being used
    Cybersecurity capabilities
    Defences against biological attacks

On the other hand, many countries in the developing world don't have these services. Notably, few of them are straightforwardly technical: they relate to institutional trust, social networks, and civic infrastructure that is famously difficult to just ‘accelerate’ into resilient configurations. Moreover, I worry that many regions are beset by other features, like:

    Ongoing violent conflict, in which AI could be misused for military ends
    Low levels of technical knowledge, which make it difficult for governments to oversee and respond to AI threats
    Poverty and instability, which make recovering from attacks on technical systems far harder

These are currently tentative beliefs. I intend to a) revise this post as soon as I can find evidence for/against these claims and b) learn more about successful case studies to better understand how these situations might change over time. If you have strong opinions here, let me know!

I strongly believe that it is essential for the Global South to actively participate in shaping the development of AI and to share in its benefits. However, I am equally concerned about the potential consequences of a company with a myopic frontier-AI-state perspective inadvertently releasing a model that falls into the hands of insurgents in conflict-affected regions, leading to catastrophic outcomes.

Here are some arguments for rapidly building societal resilience in the developing world that I'm not convinced by:

Admittedly, there's a risk of motte-and-bailey arguments with cases like these. ‘The developing world’ is a broad concept, and one risks accidentally using it to conflate India (where some societal adaptation no doubt would work) with the Democratic Republic of the Congo (where I would be a lot more sceptical).

I'm hoping to be directionally correct, but I think the details of these distinctions are important and something I would be keen to explore further. For instance, there might be a band of the developing world where technical literacy is low enough to reduce the risks of misuse to almost nothing, even though that region would in theory be extremely vulnerable to cyberattacks.

2. AI diffusion in the developing world could create notable risks

The risks from powerful AI systems have been well covered elsewhere, and the risks of AI and biosecurity in the Global South particularly so. Here, I just want to note two additional stories that I think could result from the premature spread of broadly capable, accessible, and unsecured AI in developing regions.

Note that I don't know that these will happen, and the benefits might nonetheless outweigh the risks in some cases, but they still seem worth being mindful of. I would be grateful for proxies or analogies that could help build these intuitions further.

Developmental Lock-in

Several regions in Africa are experiencing developmental lock-in, in which rival factions try to undermine one another, keeping progress minimal. Unsecured AI systems could exacerbate this.

Consider a regional group that wants to build a local digital banking system. Any attempt they make is vulnerable to hacking by regional enemies, using AI systems accessed via the dark web or supplied by external organisations. Digital theft predominates until the idea of creating a banking system is scrapped altogether, keeping the region bound to physical currency. Later, an external third party might step in with a fully formed, off-the-shelf, AI-protected banking system, which the region might choose to adopt. But this would mean outsourcing the technical literacy required to build such a system to a third party, leaving the region vulnerable to extortion or manipulation by that group in future.

(Note: in my mind, a lot of what it takes to build a nascent state over the next 50 years will relate to building digital infrastructure, both to coordinate actors in diverse areas and to organise information for decision-making purposes, à la James C. Scott. In that way, I think the nation-building efforts of 2030-2050 are likely to be even more vulnerable to cyberespionage and attack than those of previous generations. I'm also assuming the base rate of cyberattacks will rise.)

Undermining Nascent Governments

The AI arms race is often presented as an AGI race between the US and China, but I expect there eventually to be a race for AI-supported epistemic disruption tools (as well as military tools) across the rest of the digitalising, war-waging world, one that policymakers in the West might be well advised to delay.

Consider the position of an aspirationally democratic government in an unstable state that has experienced notable civil wars. An insurgent group is using a foreign-developed generative model to produce disinformation about the government across the hundreds of languages spoken in the region, which proliferates online and sows dissent. Unlike the well-established democratic governments to the north, this government has only recently taken power and lacks the connections with large technology corporations needed to stem the flow; nor is its population familiar enough with fake news, many of them having recently obtained internet access for the first time. It has little choice but to take violent action to suppress the insurrection, lowering itself to the brutality of the regime it sought to replace.

3. What can we do about this? 

We also need to build adaptation and resilience infrastructure and ensure that better tech diffuses faster and wider.

The current trajectory is to devote more people and resources to developing programmes for societal adaptation to advanced AI in developing-world societies, and to track how effective these might be. The obvious extension is to explore how viable such policies would be in different areas of the developing world, and whether governments could implement them before, say, 2030. Both seem worthwhile.

Here are some less obvious things that we might prioritise: 

Of course, another thing to do would be to buy time against diffusion by limiting the sharing and leaking of key information about models and imposing better access protocols. I've written about that here and here.

Conclusion

The developing world stands to gain a huge amount from advanced AI technologies. I'm truly excited by the prospect that advanced AI might bring a degree of growth and economic independence that helps countries in the developing world achieve sovereignty from external powers for the first time in centuries.

However: it would be a shame for the post-AGI era of development studies to be one entirely focused on dealing with the aftershocks of sharing powerful AI tools with warring nations (perhaps not unlike dropping the idea of the nation state on countries that lacked the resilience infrastructure for it), a world in which states moving towards independence once again find themselves beholden to the developed nations that promise to pull them out of it.

This would not just be a worse world for the global majority to live in; it would also risk being a far more polarised world, divided between AGI superpowers.

Nonetheless, I suspect that there are significant parts of this story left out, and I welcome comments and corrections that could help build this line of thinking or counterarguments against it. 

Thanks to Jamie Bernardi for suggestions. All errors/opinions my own. 


