Compute and size limits on AI are the actual danger

This post examines a key question in AI development: whether intelligence is best improved by scaling model size and compute, or by building higher-level abstractions. It notes that while today's large language models gain capability mainly through scale, this may not be the optimal path; human intelligence developed largely through stronger abstract thinking, which lets humans handle complex problems within a limited brain size. The post goes on to discuss how abstractions are built and what they cost, considers the implications for AI development, and analyzes the potential impact of California's SB 1047 bill on the direction of the field. It concludes by speculating that once AI models reach a certain level of abstraction ability, a rapid, singularity-like phase of development may follow, which warrants attention.

🤔 **Intelligence and abstraction:** The post argues that intelligence closely tracks the degree of abstraction of a creature's world model; humans are smarter than other animals because they are better at abstract thought and can detect higher-level patterns.

🧠 **The nature and cost of abstraction:** Abstraction is, in essence, lossy compression, and building good abstractions consumes substantial resources. When resources are plentiful, scaling the model is cheaper than building higher-level abstractions.

🚧 **The difficulty of building abstractions:** The post stresses that building excellent abstractions is very hard; this is what separates scientists like Einstein, Dawkins, and Nash from the rest of us.

🤖 **The current state of AI:** Today's AI models improve mainly by adding scale and training data, the "elephant" way rather than the human "abstraction" way.

⚠️ **Potential risk:** Once AI models reach a certain level of abstraction ability, a rapid, singularity-like development phase may emerge, and the resulting risks warrant vigilance.

Published on November 23, 2024 9:29 PM GMT

Epistemic status: rather controversial and not very well researched :) Not super novel, I assume, but a cursory look did not bring up any earlier posts; please feel free to link some.

Intuition pump: a bigger brain does not necessarily imply a smarter creature. Apes are apparently smarter than elephants, and dolphins appear smarter than blue whales. There is definitely a correlation between brain size and intelligence, but it is far from a strict one.

Starting point: intelligence is roughly equivalent to the degree of abstraction of one's world models (detecting Dennett's "real patterns" at increasingly higher levels). Humans are much better at abstract thought than other animals, and throughout the natural and artificial world a creature's ability to find higher-level patterns (including patterns in itself) tracks its intelligence.

A non-novel point: abstraction is compression. Specifically, abstraction is nothing but a lossy compression of the world model, be it of the actual physical world or of the world of ideas.
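
As a toy illustration of the compression view (my own sketch, not from the post): below, noisy observations generated by a hidden linear pattern are replaced by a two-parameter rule. The rule is the abstraction; what it throws away, the residual noise, is what makes the compression lossy.

```python
# Toy sketch: abstraction as lossy compression (stdlib only, hypothetical data).
import random

random.seed(0)
xs = [random.uniform(0, 100) for _ in range(1000)]
# The "world": a hidden pattern (y = 2x + 1) buried in noise.
ys = [2.0 * x + 1.0 + random.gauss(0, 3) for x in xs]

# Least-squares fit: the entire abstraction is just two numbers, (a, b).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# The compression is lossy: predictions are only good up to the noise floor.
rmse = (sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n) ** 0.5
print(f"rule: y ~ {a:.2f}*x + {b:.2f}; 2 numbers kept instead of {n}; RMSE {rmse:.2f}")
```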

An obvious point: generating good abstractions is expensive. If your existing mental capacity is sufficient for the resources at hand, there is no reason to spend anything on generating better abstractions. And if you have room to grow your brain and add more same-level patterns, that is cheaper than building better abstractions within the same brain size.

A less obvious point: building good abstractions is hard. This is what theoretical research is, and it is what separates the likes of Einstein, Dawkins, and Nash from the rest of us.

An implication: size and compute restrictions, combined with the need to cope with novel situations, facilitate abstraction building.

A just so story: human brain size is (currently) constrained by head size, which is constrained by hip size due to the need to walk upright, which is in turn constrained by body mass due to resource availability and, well, gravity. The result is that abstraction building became a good way to deal with a changing environment.

Current AI state: LLMs now get smarter by getting larger and training more. There are always compute and size pressures, but these are costs rather than hard constraints. Growing to get more successful, the elephant way rather than the human way, seems like the winning strategy at this point.

Absolute constraints spark abstraction building: the vetoed California bill SB 1047 "covers AI models with training compute over 10^26 integer or floating-point operations and a cost of over $100 million. If a covered model is fine-tuned using more than $10 million, the resulting model is also covered", according to Wikipedia. Had the bill been signed, once the limits were hit it would have created pressure severe enough to do more with less, and so to focus on building better and better abstractions.
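
For a sense of scale: a common back-of-the-envelope heuristic from the scaling-law literature (my addition, not part of the bill) estimates training compute as roughly 6 × parameters × training tokens. The hypothetical model sizes below show where a 10^26 FLOP threshold would start to bite:

```python
# Rough heuristic: training FLOPs ~ 6 * N * D (N = parameters, D = tokens).
# Model sizes here are hypothetical, picked only to bracket the threshold.
SB1047_THRESHOLD = 1e26  # FLOPs, per the bill's covered-model definition

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

for n_params, n_tokens in [(7e10, 2e12),    # ~70B params on ~2T tokens
                           (4e11, 1.5e13),  # ~400B params on ~15T tokens
                           (1e12, 3e13)]:   # ~1T params on ~30T tokens
    flops = training_flops(n_params, n_tokens)
    status = "covered" if flops > SB1047_THRESHOLD else "not covered"
    print(f"N={n_params:.0e}, D={n_tokens:.0e}: {flops:.1e} FLOPs -> {status}")
```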

A speculation: much better abstractions would smooth out the "jagged frontier" and reduce or eliminate the current models' weak spots, enabling the jump from "rule interpolation" to "rule invention" (in François Chollet's terms), the gap that he and other skeptics point to as the weakness of current models.

The danger: once the jagged frontier is smooth enough to enable "rule invention", we get to the "foom"-like zone Eliezer has been cautioning about. 

Conclusion: currently it does not look like there are skull-and-hip-size restrictions on AI, so even with the next few frontier models we are probably not at the point where the emerging abstraction level matches that of (smartest) humans. But this may not last.


