Superhuman AI is a very low-hanging fruit!

Published on December 26, 2024 7:00 PM GMT

This is a bit of a rough draft. I would appreciate any constructive comments, especially important arguments that I may have overlooked.

§ 1. Introduction

Summary. I argue, from the perspective of biology, that superhuman AI is a very low-hanging fruit. I believe that this argument is very solid. I briefly consider reasons why {superhuman AI}/{vastly superhuman AI} might not arise. I then contrast AI with other futurist technologies like human brain emulation, radical life extension & space colonization. I argue that these technologies are in a different category & plausibly impossible to achieve in the way commonly envisioned. This also has some relevance for EA cause prioritization.

In my experience certain arguments are ignored because they are too straightforward. People have perhaps heard similar arguments previously. The argument isn't as exciting. Because people understand the argument, they feel empowered to disagree with it, which is not necessarily a bad thing. They may believe that complicated new arguments, which they don't actually understand well, have debunked the straightforward arguments, even though the straightforward argument may be easily salvageable or was never debunked in the first place!

§ 2. Superhuman AI is a very low-hanging fruit

The reasons why superhuman AI is a very low-hanging fruit are pretty obvious.

1) The human brain is meager in terms of energy consumption & matter. 2,000 calories per day is approximately 100 watts. Obviously the brain uses less than that. Moreover, the brain is only 3 pounds.

So we know for certain that human-level intelligence is possible with meager energy & matter requirements. It follows that superhuman intelligence should be achievable, especially if we're able to use orders of magnitude more energy & matter, which we are.
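The arithmetic behind that ~100 watt figure can be checked in a few lines. This is my own back-of-the-envelope sketch; the ~20% share of resting metabolism attributed to the brain is a commonly cited estimate, not a figure from this post:

```python
# Back-of-the-envelope: convert the body's ~2,000 kcal/day budget to watts.
KCAL_TO_JOULES = 4184            # 1 food calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60

body_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY
brain_watts = 0.2 * body_watts   # brain ~20% of resting metabolism (common estimate)

print(f"whole body: ~{body_watts:.0f} W")   # ~97 W
print(f"brain:      ~{brain_watts:.0f} W")  # ~19 W
```

So the brain runs on roughly the power of a dim light bulb, which is the point: human-level intelligence demonstrably fits inside a ~20 W budget.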

2) Humans did not evolve to do calculus, computer programming & things like that.

Even Terence Tao did not evolve to do complicated math. Of course you can nitpick this to death by saying humans evolved to do many complex reasoning tasks. But we didn't actually evolve to do tasks requiring such high levels of mathematical reasoning ability. This is actually why there's such large variability in mathematical intelligence. Even with 3-pound brains, we could all have been as talented as (or even far more talented than) Terence Tao had the selective pressure for such things been strong.

3) Evolution is not efficient.

Evolution is not like gradient descent. It's a bit more like Nelder–Mead. Much of evolution is just purging bad mutations & selection on standing diversity in response to environmental change. A fitness-enhancing gain-of-function mutation is a relatively rare thing.
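To make the contrast concrete, here is a toy Python sketch (my illustration, not the author's analysis): gradient descent uses the derivative to step directly downhill, while an evolution-style search merely proposes random mutations & keeps the ones that don't hurt fitness:

```python
import random

def f(x):
    """A simple fitness landscape: lower is fitter, optimum at x = 3."""
    return (x - 3.0) ** 2

def gradient_descent(x=0.0, lr=0.1, steps=50):
    # Directed search: each step uses the analytic derivative of f.
    for _ in range(steps):
        grad = 2 * (x - 3.0)
        x -= lr * grad
    return x

def mutation_search(x=0.0, steps=50, sigma=0.5, seed=0):
    # Derivative-free search: propose a random mutation,
    # keep it only if it improves fitness (purge bad mutations).
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.gauss(0, sigma)
        if f(candidate) < f(x):
            x = candidate
    return x

print(gradient_descent())   # converges tightly to ~3.0
print(mutation_search())    # drifts toward 3.0 far less efficiently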

A) Evolution does not act at the level of the synapse.

The human genome is far, far too short. Instead the genome acts as metaparameters that determine human learning in response to the environment. I think this point cuts both ways, which is why I'm referring to it as A rather than 4. Detailed analysis of this point is far beyond the scope of this post. But I'm inclined to believe that such an approach is maybe not quite as inefficient as Nelder–Mead applied at the level of the synapse, but more limited in its ability to optimize.
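The "far too short" point can be made numerical. A rough count, using standard ballpark figures (~3.2 billion base pairs, ~10^14 synapses; both are common estimates I am supplying, not numbers from the post):

```python
# Rough counting: the genome cannot specify each synapse individually.
base_pairs = 3.2e9          # approximate human genome length
bits_per_base = 2           # A/C/G/T = 2 bits per base
genome_bits = base_pairs * bits_per_base      # ~6.4e9 bits (~0.8 GB)

synapses = 1e14             # commonly cited estimate for the adult brain
bits_per_synapse = genome_bits / synapses     # far below 1 bit per synapse

print(f"genome:           ~{genome_bits:.1e} bits")
print(f"bits per synapse: ~{bits_per_synapse:.1e}")
```

With on the order of 10^-5 bits of genome per synapse, the genome clearly cannot be a wiring diagram; it can only set the rules by which wiring is learned.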

§ 3. Possible obstacles to superhuman AI

I see only a few reasons why superhuman AI might not happen.

1) An intelligence–obedience tradeoff. Obviously companies want AI to be obedient. Even a harmless AI which just thinks about incomprehensible AI stuff all day long is not obviously a good investment. It would be the corniest thing ever if humans' tendency to be free gave us an insurmountable advantage over AI. I doubt this is the case, but it wouldn't be surprising if there is some (not necessarily insurmountable) intelligence–obedience tradeoff.

2) Good ideas are not tried because of high costs. I feel like I have possibly good ideas about how to train AI, but I just don't have a spare 1 billion dollars.

3) Hardware improvements hit a wall.

4) Societal collapse.

Realistically, I think at least 2 of these are needed to stop superhuman AI.

§ 4. Human brain emulation

In § 2 I argue that superhuman AI is quite an easy task. Up until quite recently I would sometimes encounter claims that human brain emulation is actually easier than superhuman AI. I think that line of thinking puts somewhat too much faith in evolution. The problem with human brain emulation is that the artificial neural network would need to model various peculiarities & quirks of neurons. An easy & efficient way for a neuron to function is not necessarily easy & efficient for an artificial neuron & vice versa. Adding up a bunch of things & putting that into ReLU is obviously not what a neuron does, but how complex would that function need to be to capture all of a neuron's important quirks? Some people seem to think that the complexity of this function would match its superior utility relative to an artificial neuron [N1]. But this is not the case; the neuron is simply doing what is easy for a neuron to do; likewise for the artificial neuron. Actually the artificial neuron has 2 big advantages over the neuron -- the artificial neuron is easier to optimize and it is not spatially constrained.
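For concreteness, the "adding up a bunch of things & putting that into ReLU" operation is, in its entirety, the following (a minimal sketch of a standard artificial neuron, with illustrative numbers of my choosing):

```python
def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, passed through ReLU."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)   # ReLU: clamp negative values to zero

# Example with arbitrary illustrative values:
print(artificial_neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.05))
```

That this three-line function is the entire "neuron" is what makes the contrast with a biological neuron, with its dendritic nonlinearities & other quirks, so stark.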

If human brain emulation is unavailable, then a certain vision of mind uploading is impossible. But an AI copying aspects of a person's personality, like the Magi system from that TV show, is not something that I doubt [N2].

N1. I've sometimes heard the claim that a neuron is more like an MLP. I would go so far as to claim that an artificial neuron with input from k artificial neurons, using a simple activation function like ReLU & half precision, is going to be functionally superior to a neuron with input from k neurons, because of greater optimization & lack of spatial constraints. But simulating the latter is going to be way more difficult.

N2. It is also worth noting that an AI could remember details of your life better than you can, and in that sense be more you than you could possibly be.

§ 5. Radical life extension & space colonization

Life spans longer than humans' are definitely possible & have been reported for bowhead whales, Greenland sharks & a quahog named Ming. But the number of genetic changes necessary for humans to have such long life spans is probably high. And it's unclear whether non-genetic interventions will be highly effective, given the centrality of DNA in biology.

The energy costs alone of sending enough material into space to bootstrap a civilization are intimidating. Perhaps advances like fusion or improved 3D printing will solve this problem.

§ 6. Conclusions

Won't superhuman AI make it all possible?

I'm not claiming that human brain emulation, radical life extension & space colonization are definitely impossible.

But in the case of superhuman AI we're merely trying to best something that is definitely possible, & with meager physical inputs.

On the other hand, human brain emulation, radical life extension & space colonization may be possible, or they may be too physically constrained, i.e. constrained by the laws of physics.

What is the significance of this beyond just the technical points? I'm not proposing that people preemptively give up on these goals. Some elements of human brain emulation will not require the simulation to be accurate at the neuronal level. Radical life extension via genetics seems in principle achievable, but maybe not desirable or worthwhile. My point is that a future with {superhuman AI}/{vastly superhuman AI} seems likely. But the time lag between vastly superhuman AI & those other technologies may be very substantial or infinite. Hence the importance of AI & humans living harmoniously & happily during that extended period of time is possibly paramount [N3], & the required cultural & political changes for such a coexistence are likely substantial.

N3. If things progress in a pleasant direction, this could be an opportunity for humans to have more free time with AI doing most (but not all) of the work.

Hzn
