LessWrong · September 13, 2024
RREAL AGI

Responding to the current confusion around definitions of AGI, the author proposes a new term, "(R)REAL AGI": Reasoning, Reflective Entities with Autonomy and Learning. The author argues that true AGI requires not only strong language ability but also human-like cognitive capacities such as reasoning, reflection, and autonomous learning. The article lays out each component of (R)REAL AGI, explains why each matters and how it might be implemented, and argues that as AI advances we may see (R)REAL AGI quite soon, so we need to think seriously about the challenges and opportunities it will bring.

🤔 **Reasoning:** A (R)REAL AGI has "System 2" reasoning: it can trade off "thinking time" against accuracy, integrating cognitive information to improve its results, much as humans do. This could be implemented in several ways, for example by training a model with reinforcement learning to deliberate longer on hard problems and reach more accurate judgments.

🪞 **Reflection:** A (R)REAL AGI can reflect on its own cognitive processes, which is essential for organizing and improving cognition. Reflection lets it recognize its own limitations and keep improving its learning and reasoning, for example by analyzing the mistakes it made while solving a problem and adjusting its learning strategy accordingly.

🤖 **Autonomy:** A (R)REAL AGI can act independently, without direct human guidance. Autonomy lets it complete tasks more efficiently and adapt to changing environments, for example by adjusting its own course of action as conditions change.

📚 **Learning:** A (R)REAL AGI learns continuously and keeps improving from experience. Learning is one of its core capacities, letting it steadily extend what it can do, for example by acquiring new knowledge and skills from large amounts of text.

🎯 **Completeness:** A (R)REAL AGI integrates all of the above capacities into a functionally complete, goal-directed entity. Such a complete system can understand and solve problems better and interact with humans more effectively, for example by combining its reasoning, reflection, autonomy, and learning to tackle complex scientific problems and provide real value to people.

Published on September 13, 2024 2:13 PM GMT

I see "AGI" used for everything from existing LLMs to superintelligence, and massive resulting confusion and illusory disagreements. I finally thought of a term I like for what I mean by AGI. It's an acronym that's also somewhat intuitive without reading the definition:

Reasoning, Reflective Entities with Autonomy and Learning

might be called "(R)REAL AGI" or "real AGI".  See below for further definitions.

Hoping for AI to remain hobbled by being cognitively incomplete looks like wishful thinking to me. Nor can we be sure that these improvements and the resulting dangers won't happen soon.

I think there are good reasons to expect that we get such "real AGI" very soon after we have useful AI.  After 2+ decades of studying how the brain performs complex cognition, I'm pretty sure that our cognitive abilities are the result of multiple brain subsystems and cognitive capacities working synergistically.  A similar approach is likely to advance AI.

Adding these other cognitive capacities creates language model agents/cognitive architectures (LMCAs). Adding each of these seems relatively easy (compared to developing language models) and almost guaranteed to add useful (but dangerous) capabilities. 
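As a caricature, an LMCA wraps a language model in a loop that layers these capacities on top of raw language ability. The sketch below is a toy illustration with a stub in place of the model; every name in it is a hypothetical placeholder, not the author's design or any real agent framework:

```python
def stub_llm(prompt: str) -> str:
    """Stand-in for a language model call (a real LMCA would query an LLM API)."""
    return f"improved({prompt})"

class ToyLMCA:
    """Minimal caricature of a language model cognitive architecture."""

    def __init__(self):
        self.episodic_memory: list[str] = []  # Learning: keep results of past cognition

    def reason(self, task: str, steps: int = 3) -> str:
        """Reasoning: iterate (System-2 style) instead of answering in one pass."""
        thought = task
        for _ in range(steps):
            thought = stub_llm(thought)
        return thought

    def reflect(self, result: str) -> str:
        """Reflection: critique and revise the system's own output."""
        return stub_llm(f"critique: {result}")

    def act(self, task: str) -> str:
        """Autonomy: run the full reason -> reflect -> remember loop without a human in it."""
        draft = self.reason(task)
        final = self.reflect(draft)
        self.episodic_memory.append(final)  # Learning: store the outcome for later reuse
        return final

agent = ToyLMCA()
out = agent.act("plan an experiment")
```

Each added method corresponds to one letter of the acronym, which is the point of the "relatively easy to add" claim: the loop around the model is simple scaffolding compared to the model itself.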

More on this and expanded arguments and definitions in an upcoming post; this is primarily a reference for this definition of AGI.

 

You could drop one of the Rs or aggregate them if you wanted a nicer acronym.

The above capacities are often synergistic, in that having each makes others work better. For instance, a "real AGI" can Learn important results of its time-consuming Reasoning, and can Reason more efficiently using Learned strategies. The different types of Learning are synergistic with each other, etc. More on some potential synergies in Capabilities and alignment of LLM cognitive architectures; the logic applies to other multi-capacity AI systems as well.
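The Reasoning/Learning synergy above can be caricatured as memoization: the conclusions of slow, expensive deliberation are stored so that later reasoning can retrieve them cheaply. A toy sketch, with invented step counts purely for illustration:

```python
def expensive_reasoning(problem: str) -> tuple[str, int]:
    """Simulate slow System-2 deliberation; returns (answer, deliberation steps used)."""
    steps = 100  # pretend this took many deliberation steps
    return f"solution({problem})", steps

class LearningReasoner:
    def __init__(self):
        self.learned: dict[str, str] = {}  # Learning: cached conclusions of past Reasoning

    def solve(self, problem: str) -> tuple[str, int]:
        if problem in self.learned:
            # Learned strategy: recall the stored conclusion instead of re-deriving it
            return self.learned[problem], 1
        answer, steps = expensive_reasoning(problem)
        self.learned[problem] = answer  # Learn the result of time-consuming Reasoning
        return answer, steps

r = LearningReasoner()
first = r.solve("tower of hanoi, 5 disks")   # slow: full deliberation
second = r.solve("tower of hanoi, 5 disks")  # fast: retrieved from memory
```

The same answer comes back both times, but the second call costs one retrieval step instead of a full deliberation, which is the sense in which each capacity makes the others work better.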

I like two other possible terms for the same definition of AGI: "full AGI" for artificial fully general intelligence; or "parahuman AGI" to imply having all the same cognitive capacities as humans, and working with humans.

This definition is highly similar to Steve Byrnes' in "Artificial General Intelligence": An Extremely Brief FAQ, although his explanation is different enough to be complementary: it does not specify all of the same cognitive abilities, and it provides different intuition pumps. Something like this conception of advanced AI appears to be common in most treatments of aligning superintelligence, but not in prosaic alignment work.

More on the definition and arguments for the inclusion of each of those cognitive capacities will be included in a future post, linked here when it's done. I wanted to get this out and have a succinct definition. Questions and critiques of the definitions and claims here will make that a better post.

All feedback is welcome. If anyone's got better terms, I'd love to adopt them.

 

  1. ^ Types of continuous learning in humans (and their language model cognitive architecture (LMCA) equivalents):

      - Working memory (LMCAs have the context window)
      - Semantic memory/habit learning (model weight updates from experience)
      - Episodic memory for important snapshots of cognition (vector-based text memory is this, but poorly implemented)
      - Dopamine-based RL using a powerful critic (self-supervised RLAIF and/or RLHF during deployment)


