Pondering how good or bad things will be in the AGI future

The article explores the prospects and implications of AGI development: even if LLMs had never been invented and AGI were impossible, the world could still progress and perhaps meet the Sustainable Development Goals. It also notes the risks AI development may bring, such as human extinction, and argues that an AGI future is hard to imagine and that people are unprepared for it.

🎯 If LLMs had never been invented, the AI winter had lasted, and AGI were widely considered impossible, the world could still progress in other respects and might even achieve all the Sustainable Development Goals, though some problems would remain.

💥 AI development may carry risks up to and including human extinction; some believe such a catastrophe could be fast and barely noticeable, but it could equally be a drawn-out and unpleasant disaster. These risks should not be ignored.

🤔 An AGI future is hard to imagine: although people hope for a world without suffering, no concrete AGI scenario has yet been found that is both plausible and promising, and people feel lost about what life in such a future would be like.

😕 AI companies lack a clear vision of where development is heading, journalists do not press them on it, and people are both worried about AGI and ill-prepared for it.

Published on July 9, 2024 10:46 PM GMT

Yesterday I listened to a podcast in which someone said he hoped AGI would be developed in his lifetime. This confused me, and I realized that it might be useful - at least for me - to write down this confusion.

Suppose that for some reason - a different history, different natural laws, whatever - LLMs had never been invented, the AI winter had lasted forever, and AGI were simply impossible. Progress would still have been possible in this hypothetical world, just without anything resembling what is called AI nowadays or in the real-world future.

Such a world seems enjoyable. It is plausible that technological and political progress might eventually get it to fulfill all the Sustainable Development Goals. Yes, death would still exist (though people might live much longer than they currently do). Yes, existential risks to humanity would still exist, although they might be smaller and hopefully kept in check. Yes, sadness and other bad feelings would still exist. Mental health might fare very well in the long term (but perhaps poorly in the short term, due to smartphones or whatever). Overall, if I had to choose between living in the 2010s and not living at all, I think the 2010s were the much better choice, as were the 2000s and the 1990s (at least for the average person in my area). And the hypothetical 2010s (or hypothetical 2024) without AGI could still develop into something better.

But what about the actual future?

It seems very likely that AI progress will continue. Median respondents to the 2023 Expert Survey on Progress in AI "put 5% or more on advanced AI leading to human extinction or similar, and a third to a half of participants gave 10% or more". Some people seem to think that the extinction event given roughly 5% in the AI-catastrophe case would be a very fast one - maybe too fast for people to even realize what is happening. I do not know why that should be the case; a protracted and very unpleasant catastrophe seems at least as likely (conditional on extinction). So that 5% does not seem negligible.[1]
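To make the arithmetic behind those numbers explicit, here is a minimal sketch in Python - my own illustration, not from the survey - with the 5% median taken at face value:

```python
# Minimal sketch (illustration only): converting the survey's median
# extinction estimate into talk of "possible worlds".
p_extinction = 0.05                 # median answer: 5% or more
p_no_extinction = 1 - p_extinction  # 0.95

print(f"Extinction: {p_extinction:.0%} ~ 1 in {round(1 / p_extinction)} worlds")
print(f"No extinction: {p_no_extinction:.0%} ~ 19 in 20 worlds")
```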

Well, at least in 19 out of 20 possible worlds everything goes extremely well, because then we have a benevolent AGI, right?

That's not clear, because an AGI future seems hard to imagine anyway. So hard, in fact, that while I've read a lot about what could go wrong, I haven't yet found a concrete scenario of a possible future with AGI that strikes me as both likely and promising.

Sure, it seems that everybody should look forward to a world without suffering, but when I read such scenarios, they do not feel like a real possibility - they feel like a fantasy. A fantasy does not have to obey real-world constraints, and those include not only physical limitations but also all the details of how people find meaning, how they interact, and how they feel as they spend their days.

It is unclear how we would spend our days in an AGI future, it is not guaranteed that "no one is left behind", and it seems impossible to prepare. AI companies do not have a clear vision of where we are heading, and journalists are not asking them, because they just assume that creating AGI is a normal way of making money.

Do I hope that AGI will be developed during my lifetime? No - and maybe you are reluctant about this too, but nobody is asking for your permission anyway. So if you can say something that makes the remaining 95% of probability mass look good, I would of course appreciate it. How do you prepare? What do you expect your typical day to be like in 2050?

 

  1. ^

    Of course, there are more extinction risks than just AI. In 2020, Toby Ord estimated "a 1 in 6 total risk of existential catastrophe occurring in the next century". 



