If your AGI definition excludes most humans, it sucks.

The article examines a widespread confusion in how artificial general intelligence (AGI) is defined, arguing that many criteria are so demanding that they require abilities beyond most humans, or even beyond any human, pushing AGI into unreachable territory. The author argues that AGI should instead be defined around average human ability, and distinguishes "strong AGI" (a system that can do everything an average person can do) from "weak AGI" (a system with only human-level reasoning). The article stresses that "baby AGI", meaning systems with novel-problem-solving ability within the human range, already exists, which suggests AGI is progressing far faster than expected. The author warns that inflated, extreme definitions muddle public understanding and delay serious attention to AGI's likely impact, and separately distinguishes AGI from the higher tier of artificial superintelligence (ASI), arguing that a misaligned ASI, whether weak or strong, would likely be catastrophic.

🎯 **AGI definitions have been inflated**: Many current AGI criteria are excessively demanding, requiring not only that a system can do anything any human can do, but tasks only geniuses can complete, and in extreme cases abilities no human has. This framing casts AGI as a distant milestone while ignoring its core: the capacity to handle a wide range of novel problems.

🧠 **Average human ability should be the benchmark**: The author argues that a reasonable AGI definition should cover at least ordinary human ability, and arguably be anchored to the "average human". He distinguishes "strong AGI" (a system that can do everything the average person can do on a computer) from "weak AGI" (a system with only human-level reasoning that may lack long-term memory, perception, and online learning), treating the latter as closer to the practical AGI threshold.

👶 **"Baby AGI" already exists, and progress is startling**: The article holds that "baby AGI", a system with novel-problem-solving ability within the human range (even if below average), already exists today. This indicates AGI progress is much faster than many assume, and "weak AGI" appears to be nearly here.

📢 **Clear definitions shape public awareness and risk perception**: Overly demanding or muddled AGI definitions create public confusion, leading the general public in particular to underestimate AGI's actual pace of development and potential impact. The author stresses that clear definitions help society recognize AGI's opportunities, challenges, and risks earlier.

🚀 **Distinguish AGI from ASI, and beware a misaligned ASI**: The article also separates AGI from ASI (artificial superintelligence), systems whose abilities far exceed the best humans. The author warns that if an ASI (weak or strong) emerges misaligned with human goals, the result could be ruinous, so we should not wait until AGI "flattens" us before taking it seriously.

Published on July 22, 2025 10:33 AM GMT

There are so many examples of insanely demanding AGI definitions[1] or criteria, typically involving, among other things, the ability to do something that only human geniuses can do. Usually, these criteria stem from a requirement that AGI be able to do anything any human can do. In extreme cases, people even require abilities that no humans have. I guess it's not AGMI (Artificial Gary Marcus Intelligence) unless it can multiply numbers of arbitrary size, solve the hard problem of consciousness, and remove a lightbulb from Gary Marcus's posterior, all simultaneously.

Defining AGI as something capable of doing anything a human can do on a computer naively sounds like requiring normal human-level ability. This isn't true. The issue is that there's a huge range of variation within human ability; many things that Einstein could do are totally beyond the ability of the vast majority of people. Requiring AGI to have the abilities of any human inadvertently requires it to have the abilities of a genius, creating a definition that has almost nothing to do with typical human ability. This leads to an accidental sleight-of-hand: AGI gets to be framed as a human-level milestone, then claimed to be massively distant because current models are nowhere near a threshold that almost no humans meet either. 

This is insane: AGI has ballooned into a standard that has almost nothing to do with being a general intelligence, capable of approaching a wide variety of novel problems. Almost all (maybe all, since no one has world-class abilities in all regards) of the quintessential examples of general intelligence – people – woefully fail to qualify.

Any definition of general intelligence should include at least the average human, and arguably most humans. Indeed, there's a legitimate question of whether it should require the ability to do all cognitive tasks average humans can do, or just average human-level reasoning ability. To clarify, I use strong AGI to describe a system that can do everything the average person can do on a computer, in contrast to weak AGI, which only has human-level reasoning, and might lack things like long-term memory, perception, "online learning" ability, etc. I'd use baby AGI to describe a system with novel problem-solving ability well into the human range, even if below average; this clearly exists today.

 

Use-words-sensibly justifications aside, these more modest definitions help show the deeply worrying rate of recent progress. Instead of the first major milestone term being a long way off and allowing us to forget how close we are to both a very impactful (though likely not earth-shattering) capability and a somber omen, we are confronted with tangible developments: baby AGI is no older than two[2], and weak AGI seems to be almost here.

There are also similar advantages for public awareness. As some of those examples linked in the first line show, absurdly demanding AGI definitions create equally absurd levels of confusion, even among people with a modest knowledge of the subject – they stand to do even more damage to the mostly uninformed public.

 

In response to all this, you might be thinking that a system capable of doing everything any (i.e. the smartest) humans can do is an even more important milestone, and deserves a clear label. We already have a term for this: ASI. If you prefer to differentiate between a system on par with the best humans and one far above them, use weak ASI and strong ASI respectively. My strong suspicion is that, if either of these ever exists (within a reasonable compute budget) and is misaligned, we're going to get wrecked, and it likely doesn't matter much which one. The two are a natural category.

If we keep saying "it's not AGI" until it flattens us, things aren't likely to go well. This one's not that hard.

  1. ^

    Most of these links were found fairly quickly with the aid of o3. The article was otherwise totally human-written.

  2. ^

    I'm unsure whether the first baby AGI was GPT-4 or an early LRM, leaning slightly towards the latter.


