What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"

This article summarises Yann LeCun's talk "Mathematical Obstacles on the Way to Human-Level AI". LeCun argues that large language models (LLMs) are unlikely to scale to artificial general intelligence (AGI) because of fundamental problems with their sample efficiency, the type of data they are trained on, and the way they make predictions. He proposes that AGI should be modelled on human cognition, combining sensory input, long-term memory, planning, and reasoning. LeCun advocates an architecture that combines action generation, world modelling, and objective evaluation, and argues for open source and for AI as an amplifier of human intelligence. The article also covers his views on current AI systems and his vision for AI's future.

💡 LeCun argues that LLMs will struggle to scale to AGI because they are sample inefficient: LLMs need huge numbers of training samples, while humans and animals learn far more efficiently.

👁️ LeCun stresses that LLMs are trained mainly on text, whereas humans take in far more information through sensory input such as video. To reach AGI, he argues, models must be trained on richer types of data.

🤔 LeCun criticises how LLMs make predictions: the space of options they predict over is so large that their accuracy is low. He advocates optimisation/search methods that let an AI spend more effort on harder problems.

🧠 LeCun proposes building AGI by modelling human cognition, including sensory input, tool-assisted long-term memory, planning, and reasoning. The architecture he envisions combines action generation, world modelling, and objective evaluation, with an emphasis on safety and controllability.

🌍 LeCun believes AI should be open source and act as an amplifier of human intelligence. He envisions people interacting with the digital world through AI assistants, and stresses the importance of diversity in language, culture, and values.

Published on April 5, 2025 12:21 PM GMT

This is a summary of Yann LeCun's talk "Mathematical Obstacles on the Way to Human-Level AI". I've tried to make it accessible to people who are familiar with basic AI concepts but not with the level of maths Yann presents. You can watch the original talk on YouTube.

I disagree with Yann, but I have tried to represent his arguments as faithfully as possible. I think understanding people whose opinions differ from yours is incredibly important for thinking clearly about things.

In an appendix on my blog I include Gemini 2.5 Pro's analysis of my summary. In short:

The summary correctly identifies the core arguments, uses LeCun's terminology [...], and reflects the overall tone and conclusions of the talk

Why Yann LeCun thinks LLMs will not scale to AGI

LLMs use deep learning for both base training and fine-tuning, which is sample inefficient: the model needs to see a great many examples before it learns something. Humans and animals learn from far fewer samples.

LeCun's slide

LLMs are primarily trained on text, which doesn't carry as much raw data as other formats. To get AGI we need to train models on sensory inputs (e.g. video). Measured in bits, humans take in more data than LLMs are trained on.

LeCun's slide
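To make the "bits" comparison concrete, here is a back-of-envelope calculation in the spirit of LeCun's slide. The corpus size, bytes-per-token, awake hours, and visual bandwidth below are my own rough assumptions, not the exact figures from the talk:

```python
# Rough data-volume comparison, measured in bytes.
# All numbers are illustrative assumptions, not the figures on LeCun's slide.

llm_tokens = 30e12            # assumed pretraining corpus: ~30 trillion tokens
bytes_per_token = 3           # rough average size of a subword token
llm_bytes = llm_tokens * bytes_per_token

awake_hours = 16_000          # a young child awake ~11 hours/day for 4 years
visual_bytes_per_sec = 2e6    # assumed ~2 MB/s of visual input
child_bytes = awake_hours * 3600 * visual_bytes_per_sec

print(f"LLM text data:       {llm_bytes:.1e} bytes")
print(f"Child visual data:   {child_bytes:.1e} bytes")
print(f"Ratio (child / LLM): {child_bytes / llm_bytes:.1f}x")
```

Under these assumptions, a four-year-old has already taken in roughly as many bytes through vision as a frontier LLM sees in its entire text corpus.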

The setup for LLMs has them predict the next token, so they are generating sequences in a space with exponentially many options, of which only a tiny fraction are correct. This means they are almost always at least partly incorrect. Similarly for images and videos: there are so many possible continuations, and the world is only partially predictable, that it isn't feasible for the model to be exactly right.

My visualisation

LeCun's slides
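In symbols (my restatement of the argument behind LeCun's slides, under the simplifying assumption that each generated token independently has probability $e$ of taking the answer outside the set of acceptable continuations):

$$P(\text{an } n\text{-token answer is still correct}) = (1 - e)^n$$

which decays exponentially in $n$, so even a small per-token error rate makes long outputs drift away from the correct set.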

Current AI systems spend the same amount of effort on easy problems as on hard ones, but really they should work longer on hard problems.

LeCun's slide

How Yann LeCun thinks we should build AGI

We need to model it after how humans think: sensory input, tool-assisted long-term memory, planning, and reasoning

Needs to be safe and controllable by design

The specific architecture he proposes combines action generation, a world model that predicts the consequences of candidate actions, and evaluation of the predicted outcomes against an objective.
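Below is a minimal sketch of what "planning as optimisation" could look like in such an architecture. The toy world model, cost function, and gradient-descent planner are illustrative stand-ins of my own, not LeCun's actual modules; the point is only that the system searches for actions its world model predicts will score well against an objective, and that harder problems can be given a larger optimisation budget (more "thinking time"):

```python
import torch

def world_model(state, action):
    """Toy dynamics: next state = state + action (a real system would learn this)."""
    return state + action

def cost(state, goal):
    """Toy objective: squared distance between the predicted state and the goal."""
    return ((state - goal) ** 2).sum()

def plan(state, goal, horizon=10, steps=300, lr=0.1):
    """Pick an action sequence by optimising the cost the world model predicts.
    Giving harder problems a larger `steps` budget is the 'work longer on hard problems' part."""
    actions = torch.zeros(horizon, state.shape[0], requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, total = state, 0.0
        for a in actions:              # roll the world model forward over the plan
            s = world_model(s, a)
            total = total + cost(s, goal)
        total.backward()
        opt.step()
    return actions.detach()

start = torch.tensor([0.0, 0.0])
goal = torch.tensor([3.0, -1.0])
print("first planned action:", plan(start, goal)[0])
```

Because inference is an optimisation over candidate plans rather than a single feed-forward pass, the same machinery naturally supports spending more compute on harder problems.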

AI should be released open-source, and be used as an amplifier of human intelligence.



