少点错误 — August 16, 2024
Critique of 'Many People Fear A.I. They Shouldn't' by David Brooks.

This article critiques David Brooks's views on AI, arguing that they are vague and unsupported. It stresses that we should focus on AI's testable capabilities rather than nebulous concepts, and points out the risks that AI poses.

🎯 Human intelligence arose through evolution, while AI is designed by humans; but with enough computing power we could simulate a human brain, or simulate evolution within a sufficiently complex environment.

🧐 Brooks's argument relies on many ambiguous terms whose meaning he never defines — for instance, the claim that the AI 'mind' lacks various subjective, non-material phenomena — even though an AI with all of a human's testable capabilities but no consciousness could still do human jobs.

💡 The author argues we should avoid focusing on nebulous concepts like consciousness and instead concentrate on testable capabilities, citing ChatGPT-4o as an example of a current AI system that already exhibits some of them.

⚠️ The article mentions only the risk of AI misuse; the author argues that AI poses more serious existential risks, a concern expressed by many experts in the field, yet the piece cites no AI experts.

Published on August 15, 2024 6:38 PM GMT

This is my critique of David Brooks's opinion piece 'Many People Fear A.I. They Shouldn't' in The New York Times.

 

Tl;dr: Brooks believes that AI will never replace human intelligence but does not describe any testable capabilities that he predicts AI will never possess.

David Brooks argues that artificial intelligence will never replace human intelligence. I believe it will. The fundamental distinction is that human intelligence emerged through evolution, while AI is being designed by humans. For AI to never match human intelligence, there would need to be a point where progress in AI becomes impossible. This would require the existence of a capability that evolution managed to develop but that science could never replicate. Given enough computing power, why would we not be able to replicate this capability by simulating a human brain? Alternatively, we could simulate evolution inside a sufficiently complex environment. Does Brooks believe that certain functionalities can only be realized through biology? While this seems unlikely, if it were the case, we could create biological AI. Why does Brooks believe that AI has limits that carbon-based brains produced by evolution do not have? It is possible that he is referring to a narrower definition of AI, such as silicon-based intelligence built on the currently popular machine-learning paradigm, but the article does not specify which AIs Brooks is talking about.

In fact, one of my main concerns with the article is that Brooks' arguments rely on several ambiguous terms without explaining what he means by them. For example:

The A.I. ’mind’ lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never to be repeated experiences.

Most of these terms are associated with the subjective non-material phenomena of consciousness (i.e., ‘What it is like to be something’). However, AIs that possess all the testable capabilities of humans but lack consciousness would still be able to perform human jobs. After all, you are paid for your output and not your experiences. Therefore, I believe we should avoid focusing on the nebulous concept of consciousness and instead concentrate on testable capabilities. If Brooks believes that certain capabilities require conscious experience, I would be interested to know what those capabilities are. Demonstrating such capabilities should, in that case, be enough to convince Brooks that an entity is conscious.

Take, for example, the term ‘self-awareness’. If we focus on the testable capability this term implies, I would argue that current AI systems already exhibit it. If you ask ChatGPT-4o 'What are you?', it provides an accurate answer. Compare this to how we assess whether elephants are self-aware: by marking their body in a place they cannot see and then testing whether they can identify the mark with the help of a mirror.
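To make the point concrete, here is a minimal sketch of what operationalizing such a self-description probe might look like. The criterion, the keyword list, and the sample reply are all illustrative assumptions of mine, not a rigorous test from the article; a real evaluation would need far more careful prompting and grading.

```python
def passes_self_description_probe(answer: str) -> bool:
    """Toy criterion: does the reply correctly identify the speaker as an AI system?

    This keyword check is a deliberately crude stand-in for a proper
    graded evaluation; it only illustrates the idea of a *testable*
    capability, as opposed to an untestable claim about consciousness.
    """
    keywords = ("language model", "artificial intelligence")
    text = answer.lower()
    return any(k in text for k in keywords)


# Hypothetical transcript of asking a chat model "What are you?"
reply = "I am an AI language model designed to answer questions."
print(passes_self_description_probe(reply))
```

The design choice here mirrors the mirror test for elephants: we do not ask whether the subject has inner experience, only whether it can pass a concrete, externally checkable probe.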

I suggest that Brooks supplement these ambiguous terms with concrete tests that he believes AI will never be able to pass. Additionally, it would be helpful if he could clarify why he believes science will never be able to replicate these capabilities, despite evolution having achieved them.

On a broader level, this reminds me of how people once believed that the universe revolved around Earth simply because Earth was the celestial body that mattered most to them. Just because it feels, from our human perspective, that we are special does not mean that we are. The universe is vast, and Earth occupies no significant place within it beyond being our home. Similarly, the space of potential minds and intelligences is vast. It would be very surprising if our carbon-based brains shaped by evolution occupied an insurmountable peak in this space.

In his opening paragraph, Brooks claims to acknowledge the dangers of AI, yet the only potential harm he mentions is misuse. I would argue that the most critical risks associated with AI are existential risks, and we arguably have a consensus among experts in the field that this is a serious concern. Consider the views of the four most-cited AI researchers on this topic: Hinton, Bengio, and Sutskever have all expressed significant concern about existential risks posed by AI, while LeCun does not believe in such risks. The leaders of the top three AI labs (Altman, Amodei, and Hassabis) have also voiced concerns about existential risks. I understand that the article is intended for a liberal arts audience, but I still find it unreasonable that John Keats is quoted before any AI experts.

In summary, the article is vague and lacks the specificity needed for a thorough critique. Mostly I interpret it as Brooks finding it difficult to imagine that something as different from the human mind as an AI could ever be conscious. As a result, he concludes that there are capabilities AI will never possess. The article's headline is unearned, as the piece does not even address certain concerns voiced by experts in the field, such as the existential risks posed by a failure to align Artificial General Intelligence.


