少点错误 (LessWrong) · February 10
Altman blog on post-AGI world

The article discusses some views on approaching AGI. It stresses the importance of individual empowerment, argues that AGI's benefits should be broadly distributed, notes that technological progress may upset the balance between capital and labor and may require early intervention, and mentions some ideas for achieving these goals.

💡 Individual empowerment matters as we get closer to achieving AGI

🎯 Ensuring AGI's benefits are broadly distributed is critical

⚠️ Technological progress may upset the balance between capital and labor

🤔 Some unusual ideas are proposed for achieving these goals

Published on February 9, 2025 9:52 PM GMT

First part just talks about scaling laws, nothing really new. Second part is apparently his latest thoughts on a post-AGI world. Key part:

While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.

Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.

In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.


Edit to add commentary:

That last part sounds like he thinks everyone should be on speaking terms with an ASI by 2035? If you just assume alignment succeeds, I think this is a directionally reasonable goal - no permanent authoritarian rule, ASI helps you as little or as much as you desire.



