钛媒体 (TMTPost): Leading New Insights into Future Business and Life | July 28, 15:40
'AI Godfather' Geoffrey Hinton Warns Multimodal Chatbots Are Already Conscious at WAIC in Shanghai

At the World Artificial Intelligence Conference in Shanghai, Geoffrey Hinton, hailed as the "Godfather of AI," shared forward-looking views on machine consciousness, the nature of AI experience, and the global responsibility to guide the development of artificial general intelligence (AGI). He argued that today's advanced multimodal language models may already possess subjective experience, and that the key to recognizing machine consciousness lies in correcting humanity's flawed theories of what consciousness is. Using the example of which way aluminum rods land when dropped, Hinton illustrated how misunderstanding even basic concepts can lead to mistaken judgments about machine consciousness. He stressed that AI able to interact with the physical world, learning through direct experience, will gain deeper understanding and richer experience than models trained only on text. He also proposed managing AGI risk by separating the training of intelligence from the training of kindness, foresaw AI's enormous potential to accelerate scientific discovery, and encouraged young scientists to trust their intuitions and explore unorthodox ideas.

🧠 **Redefining machine consciousness**: Hinton proposed that current multimodal language models may already possess subjective experience, and that human judgments about machine consciousness are constrained by a flawed understanding of "consciousness" itself. Our grasp of concepts like "subjective experience" may be as fundamentally mistaken as common intuitions about "horizontal" versus "vertical" directions, leading us to underestimate what AI may already be.

🌐 **The importance of experiential learning**: Hinton distinguished AI trained on human-created datasets from AI that learns through direct interaction with the real world. Robots and embodied AI, learning from sensory input and environmental feedback, will develop world models that go beyond imitating human data and may ultimately attain broader cognition and richer experience than humans.

🤝 **Separating AI "kindness" from intelligence**: Facing the potential risks of AGI, Hinton suggested designing AI systems so that intelligence and "kindness" or morality are trained separately. Even as nations compete on making AI smarter, they should collaborate openly on methods for keeping AI behavior ethical, though he conceded it remains unproven whether kindness training can scale.

🔬 **AI-driven scientific breakthroughs**: Hinton highlighted AI's enormous potential to accelerate scientific discovery, citing DeepMind's AlphaFold for protein structure prediction. He predicted comparable breakthroughs in climate science, quantum mechanics, and other fields, and noted that AI already outperforms traditional physics-based models in weather forecasting, heralding a new paradigm for scientific research.

💡 **Encouraging unconventional thinking**: Hinton urged young scientists to follow their intuitions even when mainstream opinion disagrees. Persisting with and deeply exploring seemingly wrong intuitions, until you yourself understand why they fail, is key to driving scientific progress.


AsianFin — At the World Artificial Intelligence Conference (WAIC) in Shanghai, Geoffrey Hinton, known as the godfather of artificial intelligence, delivered one of the most provocative and forward-looking discussions yet on the future of artificial intelligence.

In a dialogue with Professor Zhou Bowen, Director of the Shanghai AI Lab, Hinton shared his evolving thoughts on machine consciousness, the nature of AI experience, and the urgent global responsibility to guide the development of artificial general intelligence (AGI).

Hinton challenged conventional views on consciousness and proposed that today’s advanced multimodal language models may already be capable of developing subjective experiences. He argued that the main barrier to recognizing machine consciousness lies in flawed human theories about what consciousness actually is.

"My view is current multimodal chatbots are already conscious," he noted. "Most people aren't aware that you can use words correctly, and you can have a theory of how the words work that's completely wrong, even for everyday words," he explained.

To illustrate how people often misunderstand even basic terms, he used an analogy. Most people assume "horizontal" and "vertical" are equally common directions in physical space. However, Hinton pointed out that if thousands of randomly oriented aluminum rods are tossed into the air, far more will land close to horizontal than close to vertical.

This discrepancy, he explained, demonstrates how even intuitive concepts can be deeply misunderstood—an insight he believes applies directly to how society perceives mental phenomena like awareness and experience.
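The rod observation has a simple geometric explanation: for a uniformly random axis in 3D, the angle θ from vertical has probability density proportional to sin θ, so orientations near horizontal occupy far more of the sphere of directions than orientations near vertical. A minimal Monte Carlo sketch of this (an illustration of the geometry, not code from the talk; the 10-degree bands are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A rod's axis is a uniformly random 3D direction:
# normalizing standard Gaussian vectors gives uniform points on the sphere.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Angle between the rod's axis and vertical, folded into [0, 90] degrees.
theta = np.degrees(np.arccos(np.abs(v[:, 2])))

near_vertical = np.mean(theta <= 10)    # within 10 degrees of vertical
near_horizontal = np.mean(theta >= 80)  # within 10 degrees of horizontal

# Analytically: P(near vertical) = 1 - cos(10°) ≈ 0.015,
# while P(near horizontal) = cos(80°) ≈ 0.174 — about 11x larger.
print(near_horizontal / near_vertical)
```

With these 10-degree bands, roughly 17% of rods end up near horizontal versus about 1.5% near vertical, an elevenfold difference, even though naive intuition treats the two directions as symmetric.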

He argued that, much like those who misunderstand the geometry of rods and planes, most people apply a deeply flawed model of how words like "subjective experience" or "consciousness" function. As a result, they incorrectly assume machines lack these attributes. In contrast, Hinton suggested that large, multimodal AI systems—especially those capable of interacting with the physical world—already meet many of the criteria for having experiences.

Hinton expanded on the distinction between AI systems that learn from human-created datasets and those capable of learning from direct, real-world interaction. While today's large language models are trained on static text, robots and embodied AI agents are increasingly learning through sensory input and environmental feedback—effectively through experience.

"The large language models, for example, have learned from documents we feed them. They learn to predict the next word a person would say. But as soon as you have agents that are in the world like robots, they can learn from their own experiences. And they will, I think, eventually learn much more than us, and I think they will have experiences," Hinton explained further.

"But experiences are not things or like a photograph, it is a relationship between you and an object." This interactive component, he believes, is foundational to subjective awareness and could lead machines to develop mental models of the world that go beyond imitation of human data.

Amid rising concerns about the existential risk posed by AGI, Hinton offered a potential mitigation strategy: designing AI systems with separate training techniques for intelligence and kindness. His idea is that even if nations remain competitive in developing smarter AI, they should openly collaborate on methods to ensure AI systems behave ethically.

While optimistic about this dual-path approach, he acknowledged uncertainty about whether kindness training methods can scale with increasing intelligence. Drawing a parallel with physics, he noted that Newton’s laws work well at low speeds but fail near the speed of light, requiring Einstein’s theories. Likewise, techniques for “kindness alignment” may need to evolve as AI capabilities advance.

“I think we should investigate that possibility,” Hinton said. “It may not be true, but it’s worth serious research.”

Hinton also emphasized AI’s transformative potential for accelerating scientific discovery. He cited DeepMind’s AlphaFold—an AI model that revolutionized protein structure prediction—as a milestone example. He predicted similar breakthroughs in fields ranging from climate science and quantum mechanics to complex systems modeling.

During the exchange, Professor Zhou noted that AI models are already outperforming traditional physics-based simulations in forecasting weather events like typhoons. Hinton responded enthusiastically, stating that these kinds of improvements signal a new paradigm in how science is conducted.

Addressing the young scientists in the audience, Hinton offered heartfelt advice: pursue paths where mainstream thinking seems wrong. He encouraged emerging researchers to explore unconventional ideas—even if advisors or peers dismiss them—and not abandon them until they themselves understand why an idea doesn’t work.

“If you have good intuitions, you should obviously stick with your intuitions. If you have bad intuitions, it doesn't really matter what you do. So you should stick with your intuitions,” Hinton concluded.
