How wide is "human-level" intelligence?

This post explores the range of human intelligence in terms of compute, building a simplified model to estimate the "effective neuron count" gap between a median human and a world-historical genius. Starting from the assumption that differences in neuron count track differences in intelligence, and drawing on data on intracranial volume and intelligence test performance, the author estimates that a world-historical genius has roughly 6x the "effective neuron count" of a median human. The post goes on to discuss how this gap might translate into compute, noting that if the human brain scales like a Transformer, the time from AGI to ASI could be only a few years.

🧠 The author takes human cortical neuron counts as a baseline and tries to quantify differences in human intelligence. Surveying the literature, the author notes that neuron count differences explain part of the variation in intelligence.

📏 To simplify the model, the author assumes the other factors behind intelligence differences can be set aside. Working through the numbers, if intelligence variation were explained entirely by neuron count differences, the standard deviation of neuron counts would have to be 4.47x larger.

💡 On these assumptions, the author estimates the "effective neuron count" gap between a median human and a world-historical genius at roughly 6x. The author then discusses how this gap might translate into compute, noting that if the human brain scales like a Transformer, the time from AGI to ASI could be only a few years.

Published on July 10, 2025 11:51 AM GMT


I'm interested in estimating how many 'OOMs of compute' span the human range. There are a lot of embedded assumptions there, but let's go with them for the sake of a thought experiment.

Cortical neuron counts in humans have a standard deviation of 10-17%, depending on which source you use. Neuron counts are a useful concrete anchor that I can relate to AI models.

There are many other factors that account for intelligence variation among humans. I'd like to construct a toy model where those other factors are backed out. Put another way - if intelligence variation were entirely explained by neuron count differences, how much larger would the standard deviation of the neuron counts have to be to reproduce the same distribution we observe in intelligence?

From the literature, about 5-15% of the variance in intelligence test performance is attributable to intracranial volume differences. Intracranial volume differences also appear to be a reasonably close proxy for neuron count differences.

To be conservative, let's take the lower end (5%) as the variance in intelligence attributable to volume differences. The conclusions aren't sensitive to what you pick here.

Working this through: variance scales as the square of the standard deviation, so a neuron count standard deviation sqrt(1/0.05) ≈ 4.47x larger would produce the same distribution, with the other sources of variation removed.

Let's take the largest estimate of the cortical neuron count standard deviation (17%) and inflate it by this multiplier. So our new standard deviation for "effective neuron count" is 17% × 4.47 ≈ 76%.
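For concreteness, here's that arithmetic as a few lines of Python (a minimal sketch; the 5% variance share and 17% standard deviation are just the choices made above):

    import math

    explained_variance = 0.05  # conservative share of intelligence variance
                               # attributed to intracranial volume
    neuron_sd = 0.17           # largest estimate of cortical neuron count SD

    # Variance scales as the square of the SD, so carrying 100% of the
    # variance instead of 5% means inflating the SD by sqrt(1/0.05).
    inflation = math.sqrt(1 / explained_variance)
    effective_sd = neuron_sd * inflation

    print(f"{inflation:.2f}x")    # ~4.47x
    print(f"{effective_sd:.0%}")  # ~76%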

Next I want to estimate the gap between a median human and a world-historical genius. Let's take a population of 10 billion humans. Assuming Gaussianity, the maximum Z value you'd expect to observe in a sample of that size is about 6.5.

So the maximum 'effective neuron count' for this hypothetical individual would be 1 + 0.76 × 6.5 ≈ 5.9x the median.
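The expected-maximum figure can be sanity-checked with the standard Gumbel approximation for the maximum of N i.i.d. standard normals (again a sketch; none of this changes the headline numbers):

    import math

    N = 10_000_000_000  # population of 10 billion
    c = math.sqrt(2 * math.log(N))

    # Gumbel limit for the maximum of N standard normals:
    # location b_n, scale a_n, mean b_n + (Euler-Mascheroni constant) * a_n.
    b_n = c - (math.log(math.log(N)) + math.log(4 * math.pi)) / (2 * c)
    a_n = 1 / c
    expected_max_z = b_n + 0.5772156649 * a_n

    effective_sd = 0.76
    print(f"max Z ~ {expected_max_z:.1f}")              # ~6.5
    print(f"{1 + effective_sd * expected_max_z:.1f}x")  # ~5.9x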

So roughly 6x "effective parameter count" spans the range from human median to world-historical genius. That's not very large.

There's no direct way to translate that into "OOMs of compute". But for what it's worth: for a compute-optimally trained Transformer, 6x the parameter count needs ~36x more training compute, since compute scales roughly as the square of parameter count. So we could say that if the human brain scales on the same basis (probably wrong), rare human outliers would be equivalent to a model trained with ~1.5 OOMs more FLOPs than baseline. That's slightly less than the gap from GPT-3 to GPT-4.
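Spelled out, under that Chinchilla-style compute-optimal assumption (training compute grows roughly as the square of parameter count, because data is scaled in proportion to parameters):

    import math

    param_multiple = 6                      # effective neuron count multiple
    compute_multiple = param_multiple ** 2  # compute ~ params^2 under
                                            # compute-optimal scaling
    ooms = math.log10(compute_multiple)

    print(f"{compute_multiple}x compute")  # 36x
    print(f"{ooms:.2f} OOMs")              # ~1.56, i.e. ~1.5 OOMs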

This is a toy model and it's easy to poke holes in, but I at least found the exercise interesting. It feels plausible, and would imply timelines from AGI to ASI of at most a few years on current trends.


