The Personal Implications of AGI Realism

Published on October 20, 2024 4:43 PM GMT

 

Superintelligence Is On The Horizon

It’s widely accepted that powerful general AI, and soon after, superintelligence, may eventually be created.[1] There’s no fundamental law keeping humanity at the top of the intelligence hierarchy. While there are physical limits to intelligence, we can only speculate about where they lie. It’s reasonable to assume that even if we hit an S-curve in progress, that plateau will be far beyond anything even 15 John von Neumann clones could imagine.

Gwern was one of the first to recognise the "scaling hypothesis"; others followed later. While debate continues over whether scaling alone will lead to AI systems capable of self-improvement, it seems likely that scaling, combined with algorithmic progress and hardware advancements, will continue to drive progress for the foreseeable future. Dwarkesh Patel estimates a "70% chance scaling + algorithmic progress + hardware advances will get us to AGI by 2040". These odds are too high to ignore. Even if there are delays, superintelligence is still coming.

Some argue it's likely to be built by the end of this decade; others think it might take longer. But almost no one doubts that AGI will emerge this century, barring a global catastrophe. Even skeptics like Yann LeCun predict AGI could be reached in “years, if not a decade.” As Stuart Russell noted, estimates have shifted from “30-50 years” to “3-5 years.”

Leopold Aschenbrenner calls this shift "AGI realism." In this post, we focus on one key implication of this view—leaving aside geopolitical concerns:

“We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do.”

Of course, this could be wrong. AGI might not arrive until later this century, though this seems increasingly unlikely. Nevertheless, it’s a future we must still consider.

Even in a scenario where AGI arrives late in the century, many of us alive today will witness it. I was born in 2004, and it’s more probable than not that AGI will be developed within my lifetime. While much attention is paid to the technical, geopolitical, and regulatory consequences of short timelines, the personal implications are less often discussed.

All Possible Views About Our Lifetimes Are Wild

This title riffs on Holden Karnofsky's post "All Possible Views About Humanity's Future Are Wild." In essence, either we build superintelligence—ushering in a transformative era—or we don't. We may see utopia, catastrophe, or something in between. Perhaps geopolitical conflicts, like a war over Taiwan, will disrupt chip manufacturing, or an unforeseen limitation could prevent us from creating superhuman intelligence. Whatever the case, each scenario is extraordinary; arguably, there is no "tame," non-wild view of our future.

Personally, I want to be there to witness whatever happens, even if it’s the cause of my demise. It seems only natural to want to see the most pivotal transition since the emergence of intelligent life on Earth. Will we succumb to Moloch? Or will we get our act together? Are we heading toward utopia, catastrophe, or something in between?

The changes described in Dario Amodei's "Machines of Loving Grace" paint a picture of what a predominantly positive future with highly powerful AI systems could look like. As he notes in a footnote, his view may even be perceived as "pretty tame":

“I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.”

To be clear, what Dario describes as being perceived as "tame" already includes compressing 50-100 years of biological progress into 5-10 years, curing most physical and mental illnesses, and lifting billions out of poverty.

AI researcher Marius Hobbhahn speculates that the leap from 2020 to 2050 could be as jarring as transporting someone from the Middle Ages to modern-day Times Square, exposing them to smartphones, the internet, and modern medicine.

Or, as Leopold Aschenbrenner points out, we might see massive geopolitical turbulence.

Or, in Eliezer Yudkowsky’s view, we face near-certain doom.

Regardless of which scenario you find most plausible, one thing is abundantly clear: all possible views about our lifetimes are wild.


What Does This Mean On A Personal Level?

It’s dizzying to think that you might be alive when the 24th century comes crashing down on the 21st. If your probability of doom is high, you might be tempted to maximise risk—if you enjoy taking risks—since there would seem to be little to lose. However, I would argue that if there’s even a small chance that doom isn’t inevitable, the focus should be on self-preservation. Imagine getting hit by a truck just years or decades before the birth of superintelligence.

It makes sense to fully embrace your current human experience. Savor love, emotions—positive and negative—and other unique aspects of human existence. Be grateful. Nurture your relationships. Pursue things you intrinsically value. While future advanced AI systems might also have subjective experiences, for now, feeling is something distinctly human.

For better or for worse, no part of the human condition will remain the same after superintelligence. Biological evolution is slow, but technological progress has been exponential. The modern world itself emerged in the blink of an eye. If we survive this transition, superintelligence might bridge the gap between our biological limitations and technological capabilities.

The best approach, in my view, is to fully experience what it means to be human while minimising your risks. Avoid unnecessary dangers—reckless driving, water hazards, falls, excessive sun exposure, and mental health neglect. Look both ways when crossing the street. Focus on becoming as healthy as possible.[2] 

This video provides a good summary of how to effectively reduce your risk of death.

Maybe reading science fiction – series like The Culture by Iain Banks – is a good way to prepare for what’s coming.[3] Alternatively, some may prefer to stay grounded in present reality, knowing that the second half of this century might outpace even the wildest sci-fi. In ways we can’t fully predict, the future could be stranger than anything we imagine.

Holden Karnofsky has described a “call to vigilance” when thinking about the most important century. Similarly, I believe we should all adopt this mindset when considering the personal implications of AGI. The right reaction isn’t to dismiss this as hype or far-off sci-fi. Instead, it’s the realisation: “…oh… wow… I don’t know what to say, and I think I might vomit… I need to sit down and process this.”

To conclude: 

Utopia is uncertain, doom is uncertain, but radical, unimaginable change is not. 

We stand at the threshold of possibly the most significant transition in the history of intelligence on Earth—and maybe our corner of the universe.

Each of us must find our own way to live meaningfully in the face of such uncertainty, possibility, and responsibility. 

We should all live more intentionally and understand the gravity of the situation we're in.

It’s worth taking the time to seriously and viscerally consider how to live in the years or decades leading up to the dawn of superintelligence.

  1. ^

     For the purpose of this post, we’ll abide by the definition in DeepMind’s paper “Levels of AGI for Operationalizing Progress on the Path to AGI”.

  2. ^

     Maybe you could argue getting maximally healthy isn’t that important, as in a best-case scenario for superintelligence, nearly all diseases and ailments would be solved. But still, it probably makes sense to hedge for mega-long timelines and stay healthy.

  3. ^


