Can someone, anyone, make superintelligence a more concrete concept?

This article explores why the public underestimates the risks of artificial superintelligence, arguing that the main reason is people's difficulty grasping its potential threat on an emotional level. The author points out that our concept of superintelligence is vague, making it hard to produce an emotional response that matches the actual risk. Through analogies and scenario simulations, the article tries to awaken readers' perception of superintelligence risk, stressing the need to communicate its potential threat more effectively so as to prompt action. It also analyzes cognitive biases people may hold when confronting superintelligence, such as hubris and denial, which further hinder understanding and acceptance of the risk.

🤔 People struggle to understand superintelligence risk largely because the emotional response is missing. We have a concrete notion of human intelligence, but our concept of superintelligence is vague, so no corresponding sense of danger arises.

🚀 Superintelligence is not entirely "unfathomable"; its development path is partly predictable. In engineering, for example, a superintelligence could readily surpass current human technology, such as building faster aircraft or uncovering more network vulnerabilities.

⚠️ We need to learn to fear superintelligence, just as we learn to fear other potential dangers. Analogies and scenario simulations can help people grasp the risk on an emotional level, rather than relying on logical reasoning alone. This calls for more effective communication that provokes genuine awareness of, and alertness to, superintelligence risk.

Published on January 30, 2025 11:25 PM GMT

What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response. - Sam Harris (NPR, 2017)

 

I've been thinking a lot about why so many in the public don't care much about the loss-of-control risk posed by artificial superintelligence, and I believe a big reason is that our (or at least my) feeble mind falls short of grasping the concept. A concrete notion of human intelligence is a genius, like Einstein. What is the concrete notion of artificial superintelligence?

If you can make that feel real and present, I believe I, and others, can better respond to the risk.

The future is not unfathomable 

When people discuss the singularity, projections beyond that point often become "unfathomable." They take the form of: it will cleverly outmaneuver any idea we have, it will have its way with us, what happens next is TBD.

I reject much of this, because we can see low-hanging fruit all around us for a greater intelligence. A simple example is the top speed of aircraft. If a rough upper limit for the speed of an object is the speed of light in air, ~299,700 km/s, and one of the fastest aircraft, the NASA X-43, reached 3.27 km/s, then we see there's a lot of room for improvement. Certainly a superior intelligence could engineer a faster one! Another engineering problem waiting to be seized: there are plenty of zero-day hacking exploits waiting to be uncovered once intelligent attention is turned toward them.
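To put a number on that gap, here is a minimal back-of-the-envelope sketch in Python. It assumes nothing beyond the two rough figures quoted above, so treat the output as an order-of-magnitude illustration rather than a physical claim.

```python
# Rough scale of the gap between the fastest human-built aircraft and a
# loose physical ceiling, using only the figures quoted in the text above.
SPEED_OF_LIGHT_IN_AIR_KM_S = 299_700  # approximate speed of light in air
X43_TOP_SPEED_KM_S = 3.27             # NASA X-43's record speed

headroom = SPEED_OF_LIGHT_IN_AIR_KM_S / X43_TOP_SPEED_KM_S
print(f"Room for improvement: ~{headroom:,.0f}x")  # prints ~91,651x
```

Even granting that the practical ceiling sits far below light speed, that leaves four to five orders of magnitude of headroom for a better engineer to chip away at.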

Thus, the "unfathomable" future is foreseeable to a degree, like we know that engineerable things could be engineered by a superior intelligence. Perhaps they will want things that offer resources, like the rewards of successful hacks.

We can learn new fears 

We are born with some innate fears, but many fears are learned. We learn to fear a gun because it makes a harmful explosion. We learn to fear a dog after it bites us. 

Some things we should learn to fear are just not observable with our raw senses, like the spread of a flammable gas inside our homes. So a noxious scent gets added to make the risk observable at home, allowing us to react appropriately. I've heard many logical arguments about superintelligence risk, but my contention is that these arguments don't convey an adequate emotional message. If your argument does nothing for my emotions, then it exists like a threatening but odorless gas: one I fail to avoid because I don't readily detect it. So can you spice it up so that I understand, on an emotional level, the risk and the requisite actions to take? I don't think that requires invoking esoteric science fiction, because...

Another power we humans have is the ability to conjure up a fear that is not present. Consider this simple thought experiment: First, envision yourself at a zoo watching lions. What's the fear level? Now envision being placed inside the actual lion enclosure, and note the resulting fear level. Now envision a lion sprinting toward you while you're in the enclosure. Time to ruuunn!

Isn't the pleasure of any media, really, how it stirs your emotions?  

So why can't someone walk me through the argument that makes me feel the risk of artificial superintelligence, without requiring me to read a long tome or be transported to an exotic world of science fiction?

The appropriate emotional response

Sam Harris has said, "What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response." As a student of the discourse, I believe that's true for most. 

I've gotten flak for saying this, but having watched many, many hours of experts discussing the existential risk of AI, I see very few who express what I view as an appropriate emotional response. I see frustration and the emotions of partisanship, but those accompany everything that becomes a political issue.

To make things concrete, when I hear people discuss present job loss from AI or fears of job loss from AI, the emotions square more closely with my expectations. I do not criticize these folks so much. There's sadness from those impacted and a palpable anger from those trying to protect themselves. Artists are rallying behind copyright protections, and I'd argue it comes partially out of a sense of fear and injustice regarding the impact of AI on their livelihood.  I've been around illness, death, grieving. I've experienced loss. I find the expressions about AI and job loss congruent with my expectations. 

I think a huge, huge reason for the logic/emotion gap when it comes to the existential threat of artificial superintelligence is because the concept we're referring to is so poorly articulated. How can one address on an emotional level a "limitlessly-better-than-you'll-ever-be" entity in a future that's often regarded as unfathomable?  

People drop their "p(doom)," dully recite short-term "extinction" timelines (which aren't easily relatable on an emotional level), or veer off into deep technical tangents about one AI programming technique versus another. I'm sorry to say, but I find these expressions poorly calibrated emotionally with the actual meaning of what they're discussing.

Some examples that resonate, but why they're inadequate

Here are some of the best examples I've heard that try to address the challenges I've outlined.

When Yudkowsky talks about markets or Stockfish, he mentions how our existence in relation to them involves a sort of deference. While those are good depictions of the sense of powerlessness/ignorance/acceptance towards a greater force, they are lacking because they are narrow. Asking me, the listener, to generalize a market or Stockfish to every action is a step too far, laughably unrealistic. And that's not even me being judgmental; I think the exaggeration is so extreme that laughing is a common response! An easy rationalization is also to liken these to tools, and a tool like a calculator isn't so harmful.

What also provokes fear for me are discussions of misuse risks, like the idea of a bad actor getting a huge amount of computing or robotics power to enable them to control our devices and police the public with surveillance and drones and such. But this example is lacking because it doesn't describe loss of control, and it also centers on preventing other humans from getting a very powerful tool. I think this example is part of the narrative fueling the AI arms race, because it suggests that a good actor has to get the power first to suppress misuse by bad actors. To be sure, it is a risk worth fearing and trying to mitigate, but... 

Where is such a description of loss of control?

A note on bias

I suspect that the inability to emotionally relate to superintelligence is abetted by two biases: hubris and denial. I think it's common to feel a sort of hubris: if one loses a competition, they tell themselves, "Perhaps I lost in that domain, but I'm still best at XYZ, and if I trained more I'd win."

There's also a natural denial of death. We inch closer to it every day, but few actually think about it, and it's a difficult concept to accept even for those with a terminal disease.

So, if one is reluctant out of hubris to accept that something else is "better" than them, and reluctant out of denial to accept that death is possible, that helps explain why superintelligence is also such a difficult concept to grasp.

A communications challenge? 

So, please, can you make the concept of artificial superintelligence more concrete? Do your words arouse in a reader like me a fear on par with being trapped in a lion's den, without pointing towards a massive tome or asking me to invest in watching an entire Netflix series? If so, I think you'll be communicating in a way I've yet to see in the discourse. I'll respond in the comments to tell you why your example did or didn't register on an emotional level for me.


