How bad would AI progress need to be for us to think general technological progress is also bad?

The article examines how to weigh artificial intelligence (AI) development against human welfare. The author notes that while AI progress carries serious risks, broad economic growth, scientific advancement, and technological progress are vital to human welfare. The author asks whether humanity's overall technological progress should be slowed in order to avoid AI's potential risks, and explores the ethical and practical dilemmas behind that question.

🤔 **Risks and benefits:** AI progress carries serious risks, including the possibility of human extinction, which the EA community widely treats as an existential risk ("X-risk"). On the other hand, economic growth, scientific advancement, and (non-AI) technological progress are vital to human welfare, yet they also feed back into AI development. This raises the key question: should humanity's overall technological progress be slowed in order to avoid AI risk?

🚀 **Trading off progress against AI risk:** The author's intuitive reaction is that slowing technological progress would harm human welfare, since its benefits to the vast majority of people seem to outweigh the harm of also aiding AI development, and that we should not sacrifice humanity's overall welfare just to avoid AI risk. However, the author concedes that if the harm from AI development exceeded the benefits of general progress, slowing that progress could be worth it.

⚖️ **Finding a balance:** The article offers no definitive answer; instead it poses a question that deserves careful thought: how should the risks and benefits of AI progress be weighed, and where is the point that best safeguards human welfare?

🚧 **More discussion needed:** The article closes by inviting wider discussion of how to balance AI progress against human welfare, arguing that we need to think seriously about the risks and benefits of AI development and locate the balance point that best protects human welfare.

💡 **Key question:** How can the risks of AI development be kept in check while human welfare is safeguarded?

Published on July 9, 2024 10:43 AM GMT

It is widely believed in the EA community that AI progress is acutely harmful because it substantially increases existential risks (X-risks). This has led to a growing priority on pushing back against work that advances AI capabilities.[1]

On the other hand, economic growth, scientific advancements, and (non-AI) technological progress are generally viewed as highly beneficial, improving the quality of the future provided there are no existential catastrophes.[2]

But here’s the problem: contributing to this general civilizational progress that benefits humanity also substantially benefits AI researchers and their work.

My intuitive reaction here (and, I assume, that of most people) is something like: "yeah, OK, but surely this doesn't outweigh the benefits. We can't tell the overwhelming majority of humans that we're going to slow down science, economic growth, and the improvements these bring to their lives (and their descendants' lives) until AI is safe, just because these things would also benefit the tiny minority that is making AI less safe."

However, there has to be some threshold of harm (from AI development) beyond which we would think slowing down technological progress generally (and not only AI progress) would be worth it.

So what makes us believe that we’re not beyond this threshold?
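
To make the threshold concrete, here is a crude expected-value sketch (Python, with entirely made-up placeholder numbers; nothing in this post pins these values down). The point is only the shape of the calculation: slowing broad progress is worth it once the X-risk reduction it buys exceeds the welfare cost of slower progress.

```python
# A toy expected-value comparison for the threshold above.
# Every number is a made-up placeholder, not an estimate.

value_of_future = 1.0                 # normalized value of a long-run future with no existential catastrophe
welfare_gain_from_progress = 0.02     # near-term welfare gained by NOT slowing broad progress (same units)

p_xrisk_if_progress_continues = 0.10  # hypothetical X-risk if broad progress continues
p_xrisk_if_progress_slowed = 0.07     # hypothetical X-risk if broad (not just AI) progress is slowed

# Expected value of each policy under this toy model.
ev_continue = (1 - p_xrisk_if_progress_continues) * value_of_future + welfare_gain_from_progress
ev_slow = (1 - p_xrisk_if_progress_slowed) * value_of_future

# The threshold: slowing is worth it once the X-risk reduction it buys
# exceeds the welfare cost of slower progress (in these normalized units).
risk_reduction_needed = welfare_gain_from_progress / value_of_future
assumed_risk_reduction = p_xrisk_if_progress_continues - p_xrisk_if_progress_slowed

print(f"EV(continue broad progress) = {ev_continue:.3f}")
print(f"EV(slow broad progress)     = {ev_slow:.3f}")
print(f"Risk reduction needed to justify slowing: {risk_reduction_needed:.3f}")
print(f"Risk reduction assumed in this toy model: {assumed_risk_reduction:.3f}")
```

The disagreement, of course, is over the numbers, not the arithmetic.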

  1. ^

    For example, in his appearance on the 80,000 Hours podcast, Zvi Mowshowitz claims that working on advancing AI capabilities is "the most destructive job per unit of effort that you could possibly have". See also the recent growth of the Pause AI movement.

  2. ^

    For recent research and opinions that go in that direction, see Clancy 2023; Clancy and Rodriguez 2024.



