It is (probably) time for a Butlerian Jihad

Published on January 20, 2025 5:55 AM GMT

We find ourselves at the precipice of a great upset: a mode switch of society. "Hinge of history" might not be the best description - after all, many decisions made in the past could have prevented the present day - but we are nonetheless uniquely poised to affect the outcome of AI now, and the influence of the common man will only diminish in the future. A tangle of paths lies ahead; it is up to us to assess the outcomes and steer clear of the maelstroms and the future torment nexus.

What Lies Ahead

Generally, outcomes from AGI development might be grouped into three broad categories: "good," "mixed," and "clearly awful."

Good outcomes might be an aligned AGI or ASI ushering in something like utopia. Ambivalent outcomes include the mundane: real but modest improvements to daily life, with AI proving neither transformative nor catastrophic. And of course there are the bad outcomes, which include extinction-level threats, permanent loss of human control, and the proverbial torment nexus.

The overwhelming consensus among AI and alignment researchers, other experts, and average Joes alike is that the "aligned AGI" outcome is extremely unlikely, and that the bad outcomes are significantly more likely: 1%+ at minimum, with estimates usually ranging from 10% to 90%. The most optimistic predictions of "good" outcomes usually come from groups that stand to benefit from public support for AI (e.g. OpenAI employees and friends); these same groups, however, still publicly profess very significant credence (10%+!) in the bad outcomes.

Ultimately, the cost-benefit analysis for AI development comes out strongly against: a significant chance of an overwhelmingly bad outcome, solid odds of ambivalent and mundane outcomes, and a razor-thin gamble on an extraordinarily good one. The timescale for this decision is ever shrinking; Metaculus puts AGI at about six years out. However, humanity is not otherwise beholden to a six-year clock: should we not develop AI further, we likely have ~hundreds of years to sort out our current problems before facing similar extinction-level threats. As humanity develops, we will find ourselves better equipped to revisit artificial intelligence in the future, and could approach utopian aligned ASI in a safer, more cautious way.
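To make that cost-benefit shape concrete, here is a toy expected-value calculation in Python. The probabilities and utilities are illustrative placeholders of my own, not anyone's published estimates; the point is only how the arithmetic behaves when a catastrophic branch is in the mix:

```python
# Toy expected-value comparison for "build AGI now" vs. "pause".
# All numbers are illustrative placeholders, not real estimates.

def expected_value(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# (probability, utility) pairs: good / ambivalent / bad.
build_now = [
    (0.05, +100.0),   # razor-thin gamble on an extraordinarily good outcome
    (0.65,   +1.0),   # solid odds of ambivalent, mundane outcomes
    (0.30, -1000.0),  # significant chance of an overwhelmingly bad outcome
]

# Pausing forgoes the upside for now but avoids the downside,
# preserving the option to try again more carefully later.
pause = [
    (0.90,   0.0),    # roughly the status quo
    (0.10, +10.0),    # slower, safer progress toward the same goods
]

print(f"EV(build now): {expected_value(build_now):+.2f}")
print(f"EV(pause):     {expected_value(pause):+.2f}")
```

Under these made-up numbers, the bad-outcome term swamps everything else; building now only looks better if the catastrophic branch is assigned a probability far below what even the optimists profess.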

Best Course of Action

Personally, I'd prefer to see humanity develop itself rather than outsource its very soul to thinking machines. Nonetheless, many technologists may wish to see some of the fruits and mundane improvements brought about by AI. In either case, the best course of action remains to halt its development now - at a near human level with humans in control.

"AI pause" critics bring up the risks of a compute overhang or development by competing nations (e.g China). These are real concerns, but can absolutely be addressed. Progress on compute, much like other progress, can be halted and stalled: the research and development takes place at an extremely limited number of organizations (NVIDIA, TSMC, ASML, etc.), requires large capital & human experience investments, and is easily disrupted by government bodies. Should the US decide to halt or slow semiconductor development, it is highly likely that it could do so (the rest of the world would likely collaborate, as they are currently behind in the AI race and would reasonably fear the same outcomes we do).

Likewise, risks from competing nation-states (e.g. China) could be mitigated via existing international collaboration strategies - nuclear non-proliferation techniques such as inspections and intelligence agencies keeping checks on one another could feasibly serve as a means for the world to prevent the development of AI.

I am by no means an expert in international law, and the likely solutions in this regard will differ significantly from what I have suggested. Nonetheless, I think the chances of international collaboration against AI are strong, should the political will exist.

Appeal to Urgency

Right now, public opinion of AI is at an all-time low.

A sudden outcry from the public against AI is likely to stall or help pause AI research; a united political front or movement could plausibly delay it by several years or perhaps decades, buying us time to solve technical and social alignment challenges or to develop a long-term solution. Generally, political will is strongly swayed by positive feedback loops.

The same loops that make it easier for an anti-AI movement to propagate make it dangerously easy for AI to become a "locked-in" technology that cannot be stopped. Once a third of the population is hooked on character.ai, $100B+ in revenue is rolling in, or the US/China arms race takes off, it's going to be very, very difficult to argue for a pause or halt to AI development.
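The lock-in dynamic is worth making concrete. Below is a minimal sketch of a reinforcing adoption loop - a toy logistic model with invented parameters, not a forecast - showing how quickly a technology can pass the point where a pause is politically plausible:

```python
# Toy model of technology lock-in: adoption feeds revenue, revenue feeds
# investment, investment feeds adoption. Parameters are invented for
# illustration; the point is the shape of the curve, not the numbers.

def simulate_lockin(years=10, adoption=0.01, growth=1.8, cap=1.0):
    """Logistic-style adoption curve driven by a reinforcing loop."""
    history = []
    for _ in range(years):
        history.append(adoption)
        # Reinvestment pushes adoption up in proportion to current adoption,
        # saturating as it approaches the population cap.
        adoption = min(adoption + growth * adoption * (cap - adoption), cap)
    return history

for year, share in enumerate(simulate_lockin()):
    status = "hard to stop" if share > 0.33 else "still stoppable"
    print(f"year {year}: adoption {share:5.1%} ({status})")
```

With these parameters the curve crawls for a few years, then crosses the one-third threshold in a single step - which is why the window for building an opposition movement is now, not after the takeoff.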

Downsides

Potential downsides to a present-day attack on AI can be broken into two groups: downsides from AI pauses in general, and the effects of a concerted effort "backfiring". 

AI Pause

"AI-pause-derived" issues are largely based on the idea of an AI arms race, mostly with China. China is not run by clueless people; most users on this website went from unconcerned with AI to highly concerned with nothing but well-reasoned arguments. If China thinks it is in her best interest to not develop the Torment Nexus, she will not develop the Torment Nexus - and the best way to convince China of this is to make strides towards halting it here in the United States. 

Compute overhang issues can be resolved by simply not developing semiconductors further. Numerical computing research for medicine, aerospace, and other useful fields lags behind hardware by several years; even if we stopped developing semiconductors today, improvement in these areas would slow but not cease for a decade or more. And any software developer knows about the many, many orders of magnitude of inefficiency in the vast majority of computer programs we use today.
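That last claim is easy to demonstrate. Here's a minimal sketch of the same computation written naively in pure Python and then with NumPy's vectorized primitives; on typical hardware the gap is around two orders of magnitude, and that's a single layer of a much deeper stack of available optimizations:

```python
# Minimal demonstration that software headroom is enormous even on
# fixed hardware: the same computation, naive vs. optimized.
import time

import numpy as np

N = 10_000_000
data = np.random.rand(N)

start = time.perf_counter()
total = 0.0
for x in data:           # interpreted, one element at a time
    total += x
naive_secs = time.perf_counter() - start

start = time.perf_counter()
total_fast = data.sum()  # compiled, vectorized loop in C
fast_secs = time.perf_counter() - start

print(f"naive loop: {naive_secs:.3f} s")
print(f"numpy sum:  {fast_secs:.3f} s")
print(f"speedup:    {naive_secs / fast_secs:.0f}x")
```

Gains like this, compounded across the whole software stack, are why frozen hardware would not freeze capability overnight - and also why the overhang is bounded, since the low-hanging fruit eventually gets picked.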

Backfiring

The largest hazard here is likely the formation of a partisan divide on AI (i.e. right-wing vs. left-wing US politics), which would allow the issue to be co-opted and defeated much like earlier grassroots movements in the US. The appeals to different political groups are likely to be wildly different and potentially mutually exclusive; nonetheless, a concerted effort to address this point ahead of time could help prevent such a divide from forming.

A massive outcry of public support premised on "AI will be incredibly powerful" might also drive further investment (this is a key part of OpenAI's pitch); arguments against AI should instead focus on "AI will either flop or cause damage; neither is good" and similar points.

Battle Plan

I am too far removed from the land of SF, alignment, and US politics to effectively formulate a battle plan for coordinated anti-AI support. Nonetheless, I believe this forum contains many capable readers who, collectively, could produce significant results.

My suggestions, to be taken independently in whatever plans you all cook up, are as follows:

Conclusions & Clarifications

It's only going to get more difficult to argue for an AI pause. Humanity can solve its own problems without AI; we don't need to build it. The arguments against an AI pause or halt are not great, and they certainly aren't strong enough to justify continued sacrifice to Moloch. You, the humble LW reader, can make an impact on the ~hundreds of people that you know; even priming them against whatever shenanigans develop in the near future might help sway the outcome of the world.

As a clarifying note: I use the terms AI/AGI/ASI to mean silicon-based intelligence intended to mimic, replace, or supersede the decision-making, planning, and agency of the human brain. Software tools such as image upscaling algorithms, speech recognition, or linear regression are not included in this definition, and while they may have their own pros & cons I don't wish to involve them in the discussion of this post.

I also don't intend this post to be an ironclad argument in favor of AI pause. Rather, I think it presents a sentiment that others on this site approximately share, and would serve as a useful jumping-off point for similar-minded users to plot and scheme ways to make sure AGI doesn't get built. 

As a final note: the term "Butlerian Jihad" is taken from Dune and describes the shunning of "thinking machines" by mankind. It does not mean, in this context, terrorism or similar violent means of preventing the development of AI. I do not think these would be effective measures; right now, the best strategy lies firmly in the court of public opinion.


