One of my takeaways from EA Global this year was that most alignment people aren't explicitly focused on LLM-based agents (LMAs)[1] as a route to takeover-capable AGI. I want to better understand this position, since I estimate this path to AGI as likely enough (maybe around 60%) to be worth specific focus and concern.
Two reasons people might not care about aligning LMAs in particular:
1. Thinking this route to AGI is quite possible, but that aligning LLMs mostly covers aligning LLM agents
2. Thinking LLM-based agents are unlikely to be the first takeover-capable AGI
I'm aware of arguments and questions like "Have LLMs Generated Novel Insights?", "LLM Generality is a Timeline Crux", and LLMs' weakness at what Steve Byrnes calls discernment: the ability to tell their better ideas/outputs from their worse ones.[2] I'm curious whether these or other ideas play a major role in your thinking.
I'm even more curious about the distribution of opinions in the alignment community between positions 1 (aligning LLMs covers aligning LMAs) and 2 (LMAs are not a likely route to AGI).[3]
Edit: Based on the comments, I think perhaps this question is too broadly stated. The better question is "what sort of LMAs do you expect to reach takeover-capable AGI?"
[1] For these purposes I want to consider language model agents (LMAs) broadly: any sort of system built around models substantially trained on human language, similar to current GPTs trained primarily to predict human language use.
Agents based on language models could use a lot or a little scaffolding (including but not limited to hard-coded prompts for different cognitive purposes) and other cognitive systems (including but not limited to dedicated one-shot memory systems and executive function/planning or metacognition systems); a rough sketch of such a scaffold follows at the end of this footnote. This is a large category of systems, but they have important similarities for alignment purposes: the LLM generates the "thoughts", while the other systems direct and modify those "thoughts", to both organizing and chaotic effect.
This of course includes multimodal foundation models for which natural language training is a major component; most current systems we call LLMs are technically foundation models. I think the language training is the most important part. I suspect language training is remarkably effective because human language is a high-effort distillation of the world's semantics, but that is another story.
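To make the category a bit more concrete, here is a minimal, purely illustrative sketch of what "a little scaffolding" around an LLM might look like: hard-coded prompts for different cognitive purposes (planning and critique), a simple memory store outside the model's weights, and an executive loop that directs the model's "thoughts". Nothing below is from the post or any particular library; call_llm, the prompts, and the Memory class are hypothetical placeholders.

```python
# Purely illustrative sketch of a minimal LMA scaffold; nothing here is from
# the post. call_llm, the prompts, and Memory are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class Memory:
    """A dedicated memory store living outside the LLM's weights."""
    notes: list[str] = field(default_factory=list)

    def recall(self, k: int = 5) -> str:
        return "\n".join(self.notes[-k:])

    def store(self, note: str) -> None:
        self.notes.append(note)


def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real language-model API call."""
    return "goal achieved (placeholder output)"


def run_agent(goal: str, max_steps: int = 10) -> str:
    """A thin executive loop: the LLM generates 'thoughts', the scaffold
    directs them with hard-coded prompts for different cognitive purposes."""
    memory = Memory()
    for _ in range(max_steps):
        plan = call_llm(
            f"Goal: {goal}\nRelevant notes:\n{memory.recall()}\n"
            "Propose the single next step."
        )
        critique = call_llm(f"Critique this step for the goal '{goal}': {plan}")
        if "abandon" in critique.lower():
            continue  # the scaffold, not the LLM, decides control flow
        result = call_llm(f"Carry out this step and report the result: {plan}")
        memory.store(f"step: {plan}\nresult: {result}")
        if "goal achieved" in result.lower():
            return result
    return memory.recall()
```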
[2] I think humans are also relatively weak at generating novel insights, generalizing, and discernment when using only our System 1 processing. I think agentic scaffolding and training are likely to produce System 2 strategies and skills similar to those humans use to scrape by in those areas.
[3] Here is my brief abstract argument for why no breakthroughs are needed for this route to AGI; this summarizes the plan for aligning such agents on short timelines; and System 2 Alignment is my latest in-depth prediction of how labs will try to align them by default, and how those methods could succeed or fail.