Intelligence Is Jagged

This article examines the uneven development of AI capabilities, noting that AI has advanced dramatically in some domains (such as coding) while still falling short in others (such as literary writing). It argues that simple comparisons between AI and humans are misguided, since the two differ in cognitive abilities and excel in different areas. It also observes that people often judge AI's intelligence by human standards, a habit that may keep us from understanding AI's true potential and the risks it carries. The author uses "jagged intelligence" to describe this uneven state of AI development and calls for a more open-minded view of AI's future.

🤖 **Uneven AI capability development:** AI has progressed markedly in some domains (such as coding) but still falls short in others (such as literary writing), exhibiting a "jagged" pattern of development.

🧠 **Cognitive differences between humans and AI:** Humans and AI differ in their cognitive abilities and excel in different areas. For example, AI is adept at rapidly processing JSON data, while humans are better at creative thinking.

🚫 **Avoid judging AI by human standards:** People often evaluate AI's intelligence against human benchmarks, but this view may hinder our understanding of AI's true potential and the risks it poses. The author argues that AI intelligence should not be reduced to a simple comparison with human IQ.

🚀 **The future of AI:** As AI technology continues to develop, AI systems will display new capabilities across different domains. However, the path of AI capability development may differ from the human one, so we should view AI's future with a more open mind.

Published on February 19, 2025 7:08 AM GMT

In ethics, there is an argument called "name the trait." It is deployed in many contexts, such as veganism ("name the trait that justifies our poor treatment of animals") and theology ("name the trait that grants humanity dominion over the Earth"), among others. The idea is to challenge your interlocutor to specify some trait, and then to claim that trait as non-unique: "Well, cats have that trait too!"

It seems to me that we are in a similar situation when discussing AI capabilities, but with a different result: as of now, you can absolutely point to traits that frontier models lack and that differentiate them from humans. There is no grey area here, not yet. AI models' capabilities in some domains, like coding, have clearly progressed to the point where even skeptics find it hard to deny that the models could feasibly replace humans in the future. But in other domains there has been no such progress, at least not visibly; I imagine it will be a while yet before I read an excellent AI-generated novel, for example, and the Overton window among my less situationally aware friends still doesn't include the claim that AI literature will ever happen. At the proverbial dinner party, I am still proverbially laughed out of the proverbial room.

Of course, I would answer that you can just as easily reverse the question and point out things that LLMs are flatly better than us at. I cannot, much as I try, think in JSON quickly enough to serve this API, for example. Tasks like that feel distinctly unrealistic for a human to do, of course, but that's my point: there are tasks where we obviously excel and tasks where the LLMs obviously excel. At present, we are complementary systems, much as humans and computers have always been.

Now, I don't (yet) believe the set of things LLMs are good at is nearly as large as the set that humans are good at, but at this stage of development there's already an interesting lack of overlap in those sets. If this weren't so, then LLMs would remain as useless for practical work as they were before GPT-3.

From *Introduction to AI Safety, Ethics, and Society* by Dan Hendrycks:

While some would argue that an intelligence based on silicon or other materials will be unable to match one built on biological cells, we see no compelling reason to believe that particular materials are required. Such statements seem uncomfortably similar to the claims of vitalists, who argued that living beings are fundamentally different from non-living entities due to containing some non-physical components or having other special properties. Another objection is that copying a biological brain in silicon will be a huge scientific challenge. However, there is no need for researchers looking to create HLAI to create an exact copy or "whole brain emulation". Airplanes are able to fly but do not flap their wings like birds; nonetheless, they function because their creators have understood some key underlying principles.

Indeed, we are not building human brains. Whatever vectors are in my head, I do not anticipate an AI will ever exist that shares my neural architecture exactly. We exist at some point in some optimization gradient, and the AIs are on some other gradient; or, at least, we are climbing the same one toward a general intelligence from very different directions.[1]

In the general population, a lot of self-anchoring seems to be going on. The idea of a system that can be so obviously terrible at some things we humans take for granted, and yet still be, in some meaningful sense, intelligent, is foreign to most of us. But it seems that we will live out the rest of our time on this planet with such systems and their descendants. My fun term for this is the jaggedness of intelligence, which is just a visual way of imagining a fact we all already know: that intelligence is far more complex than a position along an axis. It is not as though a human has a certain IQ and an LLM has another; you would need many more dimensions of comparison than that. If you were to visualize the peaks and valleys of our cognitive abilities, we humans are certainly jagged too. Our topography just differs immensely from that of the AI systems we're building.
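
To make the "more dimensions than an IQ score" point concrete, here is a minimal sketch in Python. The domains and scores below are invented purely for illustration; the point is only that when two capability profiles are jagged, neither Pareto-dominates the other, so no one-dimensional ranking exists until you choose, arbitrarily, how to weight the domains.

```python
# A toy sketch of "jagged" capability profiles. The domains and the
# scores are invented for illustration; they are not real benchmark
# numbers.

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    scores: dict[str, float]  # domain -> ability on an arbitrary 0-100 scale

def dominates(a: Profile, b: Profile) -> bool:
    """True if `a` is at least as good as `b` everywhere and strictly better somewhere."""
    ds = a.scores.keys()
    return all(a.scores[d] >= b.scores[d] for d in ds) and any(
        a.scores[d] > b.scores[d] for d in ds
    )

human = Profile("human", {"novel_writing": 90, "coding": 70, "json_throughput": 5})
llm = Profile("llm", {"novel_writing": 55, "coding": 85, "json_throughput": 99})

# Neither profile dominates the other, so no single scalar can rank them
# without first choosing how to weight the domains.
print(dominates(human, llm), dominates(llm, human))  # -> False False
```

Under this framing, any scalar comparison between the two profiles is really a statement about the chosen weights, not about the profiles themselves.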

I argue that, until a certain point of AI advancement (e.g., recursive self-improvement), things are almost certain to remain this way: with visibly, highly uneven progress. We are crafting a new type of cognitive structure, and we are seeing what that structure can do at various levels of its development as it builds its own psychology. This is something very different from watching a child grow up, but perhaps we are watching something similar, at least enough to draw the analogy, when we see emergent capabilities emerge emergently along the scaling curve.

As this continues, we have no reason to expect AI systems' capabilities will develop in a way that maps cleanly onto how human capabilities do, or do so in the same order, or result in the same set. All the same, the natural response from most people is to deny that we could possibly take seriously the intelligence of what we're creating—"you're telling me that you think ChatGPT is smart? It can't even do X!"—and I fear that unless and until someone sees uniformly human-level AI,[2] people will still say things like that even as the possible values of X shrink precipitously in number and perhaps reach zero.

But perhaps finding a more visceral way to guide people out of thinking that intelligence has a purely human shape, or that it would mean anything for an LLM to take an IQ test, would help them understand what's happening, where capabilities are probably headed, and what risks are involved.

  1. ^

    It just so happens that evolution and natural selection work a lot more slowly and less directionally than deep learning training runs do.

  2. ^

    By which I mean "an AI that is at least human-level at all tasks, such that there are no longer any gaps in capability relative to us."



