In defense of technological unemployment as the main AI concern
Published on August 27, 2024 5:58 PM GMT

It seems to me that when normal people are concerned about AI destroying their lives, they are mostly worried about technological unemployment, whereas rationalists think the bigger risk is that the AI might murder us all, and that automation is good because it gives humans more wealth and free time.

I'm not entirely unsympathetic to the rationalist position here. If we had a plan for how to use AI to create a utopia where humanity could thrive, I'd be all for it. We have problems (like death) that we are quite far from solving on our own, and which, it seems, a superintelligence could in principle solve quickly.

But this requires value alignment: we need to be quite careful about what we mean by concepts like "humanity", "thrive", etc., so that the AI can explicitly maintain good conditions. What kinds of humans do we want, and what kinds of thriving should they have? This needs to be explicitly planned by any agent that solves this task.

Our current society doesn't say "humans should thrive"; it says "professional humans should thrive". Certain alternative types of humans, like thieves, are explicitly suppressed, and other types, like beggars, are not exactly encouraged. This is of course not an accident: professionals produce value, which is what allows society to exist in the first place. But with technological unemployment, we decouple professional humans from value production, undermining the priority that our current society places on human welfare.

This loss is what causes existential risk. If humanity were indefinitely competitive at most tasks, the AIs would want to trade with us or enslave us rather than murder us or let us starve to death. Even if we manage to figure out how to value-align AIs, this loss raises major questions about what to value-align the AIs to: if we value human capabilities, for example, the fact that those capabilities have become uncompetitive likely means they will atrophy to the point of being vestigial.
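To make the competitiveness point concrete, here is a minimal toy model (my illustration, not from the post; every parameter is invented). It sketches an employer choosing between human and AI labor as AI costs fall: once an AI worker's cost per unit of output drops below a human's subsistence wage, hiring humans stops making economic sense, no matter how low humans are willing to bid.

```python
# Toy sketch (hypothetical numbers throughout): when does it stop being
# profitable to hire humans at all? A human's cost per unit of output is
# floored at the subsistence wage, while AI costs keep falling.

SUBSISTENCE_WAGE = 1.0   # lowest wage a human can survive on (arbitrary units)
HUMAN_OUTPUT = 1.0       # output per human worker per period
AI_OUTPUT = 1.0          # output per AI worker per period (held equal for simplicity)
AI_COST_DECLINE = 0.7    # assumed per-period multiplier on AI running costs

ai_cost = 10.0           # initial cost of running one AI worker
for year in range(10):
    # The employer hires whichever worker produces a unit of output more cheaply.
    human_cost_per_unit = SUBSISTENCE_WAGE / HUMAN_OUTPUT
    ai_cost_per_unit = ai_cost / AI_OUTPUT
    hired = "humans hired" if human_cost_per_unit <= ai_cost_per_unit else "humans not hired"
    print(f"year {year}: AI cost/unit = {ai_cost_per_unit:.2f} -> {hired}")
    ai_cost *= AI_COST_DECLINE
```

The floor at the subsistence wage is the load-bearing assumption here: the usual comparative-advantage reassurance (humans will always find work at some wage) fails once the market-clearing wage falls below what it costs to keep a human alive, which is exactly the "starve to death" branch above.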

It's unclear how to solve this problem. Eliezer's original suggestion was to keep humans more capable than AIs by increasing the capabilities of humans. Yet even increasing humanity's capabilities at all is difficult, let alone keeping pace with technological development. Robin Hanson suggests that humanity should just sit back and live off our wealth as we get replaced. I guess that's the path we're currently on, but it seems really doubtful to me whether we'll be able to keep that wealth, and whether the society that replaces us will have any moral worth. Either way, these questions are nearly impossible to separate from the question of what kinds of production will be performed in the future.


