How to avoid death by AI.

 

The article discusses AI safety risk. The author argues that to avoid the potential threat posed by AI, we need to build a decentralized social reasoning platform in which everyone can take part in learning and verifying information. Through a reward mechanism, the platform incentivizes participation and raises collective intelligence, allowing us to respond effectively to the AI safety challenge.

🤔 **AI safety risk:** The article argues that if we keep scaling machine learning systems, humanity faces extinction. The author holds that the key to solving this problem is getting everyone to understand why AI safety matters and to take an active part in solving it.

💰 **Reward mechanism:** The author proposes that by building a decentralized social reasoning platform and offering users financial incentives, more people can be encouraged to take part in learning and verifying information. Such a mechanism can raise collective intelligence and promote understanding of, and solutions to, AI safety problems.

🧠 **Collective intelligence:** The article stresses that collective intelligence is not the same thing as artificial intelligence. Collective intelligence means pooling the intelligence of many individuals, through collaboration and interaction, to solve complex problems. The author argues that a decentralized social reasoning platform can harness collective intelligence to meet the AI safety challenge.

🚀 **Why the platform matters:** The author believes a decentralized social reasoning platform can help people better understand AI safety problems and find effective solutions. It could become an important tool for meeting the challenges AI poses and for securing humanity's future.

☄️ **A warning:** Using the film "Don't Look Up" as an example, the author argues that without an effective social reasoning platform, humanity cannot respond to crises. Building a platform that fosters collective intelligence is therefore vital to humanity's future.

Published on July 23, 2024 1:59 AM GMT

If we continue to scale ML systems, we will all perish.

If you disagree with that, you should probably be reading Eliezer's work instead.

If you are caught up on his work, then I have something new for you to think about.

 

One solution to this problem would be to teach everyone on the planet the full set of reasons Eliezer holds this position.  That would be the 'humanity grows up and decides not to build AI' possible future.

That seems like an intractable task.  Most people do not care about those reasons.  They don't care about learning what they can do to alter the trajectory of our future.  They have no incentive to comprehend what you want them to comprehend.  We can't force knowledge into people's heads.

How on Earth could we possibly educate so many people about such a nuanced topic?

How could we verify that they really understand?

You give them something they do care about: money.

If you pay individuals to prove they understand something (in a way that works), you create a function that takes money as input and outputs public comprehension of whatever topics are incentivized.

That's something we really need right now.  It would serve the function of a 'fire alarm' not only for Eliezer's message, but for any information any person wants another person to consider.

This can be done by constructing a decentralized collective intelligence that rewards individuals for using it.
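To make the incentive concrete, one can picture the "money in, comprehension out" function as a bounty loop: a sponsor funds a topic, readers answer verification questions about it, and payouts are released only for answers the verifier accepts. The sketch below is purely illustrative; the class name, the flat per-user payout, and the `verify` callback are assumptions about one way such a mechanism could work, not a specification of the platform.

```python
# Illustrative sketch only: money goes in as a sponsored bounty on a topic,
# verified comprehension comes out as payouts. All names and the payout rule
# are assumptions, not a real design.
from dataclasses import dataclass, field
from typing import Callable, Set

@dataclass
class ComprehensionBounty:
    topic: str
    pool: float                      # money a sponsor has put in
    payout: float                    # reward per verified demonstration
    verify: Callable[[str], bool]    # e.g. a graded quiz or peer review
    passed: Set[str] = field(default_factory=set)

    def submit(self, user: str, answer: str) -> float:
        """Pay `payout` the first time a user demonstrates comprehension."""
        if user in self.passed or self.pool < self.payout:
            return 0.0
        if self.verify(answer):
            self.passed.add(user)
            self.pool -= self.payout
            return self.payout
        return 0.0

# Toy usage: a sponsor funds 100 units; each verified reader earns 5.
bounty = ComprehensionBounty(
    topic="AI risk arguments",
    pool=100.0,
    payout=5.0,
    verify=lambda answer: "instrumental convergence" in answer.lower(),
)
print(bounty.submit("alice", "Instrumental convergence implies ..."))  # 5.0
print(bounty.submit("alice", "Instrumental convergence implies ..."))  # 0.0, already paid
```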

If you aren't familiar with collective intelligence, do not mistake it for artificial intelligence.

It is a completely different field of computer science (https://cci.mit.edu/).

It is a paradigm shift from a self-contained intelligent system that can evolve beyond us to a system that has humans as its parts and requires our interactions to grow.

It's a shift from trying to get a machine to twirl a pencil all by itself to getting a machine that can coordinate billions of people to solve much more complex problems.

That's intelligence also.

There are projects in this spirit (Community Notes, Wikipedia, Research Hub, prediction markets, Anthropic/collectiveintelligenceproject), but they fall short.

What we actually need is a place where people intentionally go to receive an education, read the news, verify the truth and get rewarded financially for it (without needing to invest capital).

I think people really underestimate the effect such a mechanism would have on society.

Or they just haven't thought about it carefully enough.

 

 

I believe I have an algorithm (from work on a GOFAI project in 2010) that will scale the effectiveness of collective intelligence by orders of magnitude.  Unfortunately, it would also help LLMs reason more effectively and possibly piss off intelligence agencies as much as Bitcoin pissed off the banks, so it's not online anywhere.

I am posting this on LW because I don't currently have a way to stamp my intellectual property onto a blockchain in such a way that only Eliezer can see it and I can pay him to demonstrate that he has thought about it.
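For what it's worth, the commitment part of that doesn't require revealing anything: publishing a salted hash of the write-up (for example in the data field of a blockchain transaction) timestamps it without exposing it, and the write-up itself could then be shared privately with a single recipient. Below is a minimal sketch of such a hash commitment using only the Python standard library; the placeholder document and the workflow around it are assumptions, not a description of any existing service.

```python
# Minimal sketch of a salted hash commitment (standard library only).
# Publishing the 32-byte digest somewhere timestamped (e.g. in a blockchain
# transaction) lets you later prove you possessed the document, without revealing it.
import hashlib
import os

def commit(document: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(32)                          # hides guessable content
    return salt, hashlib.sha256(salt + document).digest()

def verify(document: bytes, salt: bytes, digest: bytes) -> bool:
    return hashlib.sha256(salt + document).digest() == digest

document = b"...write-up of the algorithm..."      # placeholder bytes, not real content
salt, digest = commit(document)
print("publish this digest on-chain:", digest.hex())
assert verify(document, salt, digest)              # later: reveal document + salt to prove it
```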

It wasn't the comet that killed everyone in "Don't Look Up"; it was the fact that they hadn't yet built a functional decentralized social reasoning platform where everyone earned their living by learning and verifying things.  If they had built that, they could have communicated the problem, negotiated solutions, and survived.  That's the lesson we were supposed to draw from it.

If we had built what I'm talking about back in 2010, when I came up with it, then billions of people would share Eliezer's concerns today.

That's what we need to build.



Discuss
