Cast it into the fire! Destroy it!

This article examines how AGI should be used, arguing that humanity does not need AGI to achieve most of its goals, that AGI carries many dangers, and that humanity should seal away its power. It also notes that people need time to adjust their moral intuitions to a changing reality, and cites the warnings of several great thinkers.

🎯 Humanity does not need AGI to create a post-scarcity society or reach similar goals; we can get there through our own efforts.

⚠️ AGI carries many dangers, such as disintegrating our sense of morality and letting our capabilities outrun our ethics.

💡 Humanity should seal away AGI's power so it can never be used again, though this is no easy task.

🌌 An alien civilization may already have handled the AGI problem correctly, and humanity might benefit from that.

Published on January 13, 2025 7:30 AM GMT

We should only use AGI once to make it so that no one, including ourselves, can use it ever again.

I'm terrified both of being atomized by nanobots and of my sense of morality disintegrating in Extremistan. We don't need AGI to create a post-scarcity society, cure cancer, solve climate change, build a Dyson sphere, colonize the galaxy, or do any of the other sane things we're planning to use AGI for. It will take hard work and time, but we can get there with the power of our own minds. In fact, we need that time to let our sense of morality adjust to our ever-changing reality. Even without AGI, most people already feel that technological progress is too fast for them to keep up.

Some of the greatest thinkers and writers of humanity have warned us of the danger and seductiveness of unlimited power. Take this passage from Tolkien and tell me it doesn't sound like most of the people you've heard talk about the wonderful things they're planning to do with AGI:

Already the Ring tempted him, gnawing at his will and reason. Wild fantasies arose in his mind; and he saw Samwise the Strong, Hero of the Age, striding with a flaming sword across the darkened land, and armies flocking to his call as he marched to the overthrow of Barad-dûr. And then all the clouds rolled away, and the white sun shone, and at his command the vale of Gorgoroth became a garden of flowers and trees and brought forth fruit. He had only to put on the Ring and claim it for his own, and all this could be.

Lovecraft warned us of what would happen when our abilities outpaced our morality, when we ourselves would become powerful like cosmic horrors:

The time would be easy to know, for then mankind would have become as the Great Old Ones; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated Old Ones would teach them new ways to shout and kill and revel and enjoy themselves, and all the earth would flame with a holocaust of ecstasy and freedom.

Humanity has nothing to gain from AGI, and everything to lose. We don't need an AGI that has human values or follows instructions in a friendly manner. We just need to figure out the one command that seals off that power forever - without disassembling ourselves in the process.

If Geoffrey Hinton, Eliezer Yudkowsky, and other top AI researchers are wrong about the power and dangers of AGI, then the AGI will probably be incapable of following the command to the extent we imagine anyway.

On the other hand, if those researchers are right, only then will humanity understand the depth of the precipice on which it stood. It's one thing to listen to experts talk about hypothetical future dangers, and another to see hundred-billion-dollar distributed computers inexplicably turned into paperweights. Few will be able to deny the reach of that power, or of that danger, then. Humanity will survive, and hopefully recognize that there really are "seas of black infinity" out there that we may never be ready to touch.

If you, dear reader, have a chance of being that first person to give a command to a superintelligence, don't be an Isildur. Unlimited power won't do any good for you or for anyone else, and it was not meant for us to bear. If you can, seal that power and free humanity from the fear of eternal death and eternal nightmare.

Of course, making an AGI ensure that AGI is never used again is easier said than done. Even this seemingly simple problem appears to be on the same order of difficulty as alignment in general, and just as likely to get us all disassembled if we screw it up. Still, this is the problem AI alignment researchers should focus on.

One silver lining is that we may be within the light cone of an alien civilization that actually got this right, in which case their "anti-AGI AGI" is already in our solar system, and we'll simply get to laugh as Microsoft admits it can't turn Stargate on, then go on living our normal lives.


