The nihilism of NeurIPS

 

After attending NeurIPS, the author comes away feeling adrift and nihilistic about the rapid development of artificial intelligence. He observes that AI researchers seem to carry on their work mechanically while overlooking the profound consequences AI may bring. As AI capabilities keep improving, human value and purpose seem to blur, which reminds him of a Kurt Vonnegut science-fiction story. Like the creatures in that story, we keep building machines to serve ever-higher purposes, only to risk discovering in the end that we have no purpose at all. This sense of contradiction, along with the precision with which humanity documents its own replacement, strikes him as both sad and helpless, and prompts deep reflection on his own worth and on the direction of technological development.

🤯 AI advances rapidly, human value loses its bearings: At NeurIPS, a top AI conference, the author feels a sense of nihilism, realising that the rapid development of AI is blurring human value and purpose and prompting deep reflection on his own meaning.

🤖 Machines replace humans, so what is the purpose: The author recalls Kurt Vonnegut's science-fiction story, in which creatures keep building machines to pursue ever-higher purposes, only to be told by the machines that they have no purpose at all, a reflection that calls the direction of technological development into question.

🤔 Academia carries on blindly, overlooking the deeper consequences: The author observes that the academic community seems numb to the pace of AI progress; researchers focus on small technical improvements while ignoring the profound impact AI may have, which he finds worrying.

🚢 A ship transforming rather than sinking, while humanity documents its own replacement: The author invokes the image of rearranging deck chairs on the Titanic, except that the ship is not sinking but transforming into something else; the precision and care with which humanity records its own replacement only underscores the tragic helplessness of it.

Published on December 20, 2024 11:58 PM GMT

"What is the use of having developed a science well enough to make predictions if, in the end, all we're willing to do is stand around and wait for them to come true?" F. SHERWOOD HOWLAND in his speech accepting the Nobel Prize in Chemistry in 1995.

 

"Once upon a time on Tralfamadore there were creatures who weren’t anything like machines. They weren’t dependable. They weren’t efficient. They weren’t predictable. They weren’t durable. And these poor creatures were obsessed by the idea that everything that existed had to have a purpose, and that some purposes were higher than others. These creatures spent most of their time trying to find out what their purpose was. And every time they found out what seemed to be a purpose of themselves, the purpose seemed so low that the creatures were filled with disgust and shame. And, rather than serve such a low purpose, the creatures would make a machine to serve it. This left the creatures free to serve higher purposes. But whenever they found a higher purpose, the purpose still wasn’t high enough. So machines were made to serve higher purposes, too. And the machines did everything so expertly that they were finally given the job of finding out what the highest purpose of the creatures could be. The machines reported in all honesty that the creatures couldn’t really be said to have any purpose at all. The creatures thereupon began slaying each other, because they hated purposeless things above all else. And they discovered that they weren’t even very good at slaying. So they turned that job over to the machines, too. And the machines finished up the job in less time than it takes to say, “Tralfamadore.” ― Kurt Vonnegut, The Sirens of Titan

I walked around the poster halls at NeurIPS last week in Vancouver and felt something very close to nihilistic apathy. Here, supposedly, was the church of AI, the world's smartest people converging to work on the world's most important problem. As someone who is usually inspired and moved by AI, who gets excited to read these cool papers and try things myself, I found this a strange feeling. I wondered if there was a word in German for this nihilism that arises from looking at all these posters that will end up in the recycling.

Of course, part of this is an ambivalence towards the academic conference system. Obviously, some part of my disdain arises from the fact that most of these papers are written as small projects to keep a grant or win a grant. Most of them will be forgotten to the streams of time - and that's okay. I guess that's a part of what science is.

But this year I felt something deeper than that: a sense in which none of this matters. I will try to partition this feeling based on where its different components come from.

First, there's the visceral sting of being left behind. Not getting to shape something that's reshaping everything feels like a special kind of meaninglessness. When OpenAI's o3 dropped today, it felt like watching a fuzzy prototype of AGI emerge into the world. Here was this system casually solving ARC - a problem I'd earmarked for my PhD - and essentially becoming the world's best programmer without fanfare or ceremony. There's a strange pride in seeing what humans can create, but it's edged with something darker. Beyond just missing this milestone, I'm haunted by the meta-realisation that I'm not part of what might be humanity's final meaningful creation - the system that renders all other human efforts obsolete. 

Another component is the sense of "I don't really want to be involved anyway". Aside from the messiahs who believe that bringing AGI into the world is their quasi-religious mission, I think most people researching AI have a genuine, well-motivated reason for being involved. But when timelines are this short (if you believe in the consequences of models like o3), it's hard to envy any AI researcher. Yeah, I could swap places with one of the top professors at the top labs, or with someone who cracked test-time compute or something similar, even swap places with Alec Radford, and I don't think I'd feel any differently. I think I'd just be melancholic that it's all about to end, that my utility as a learning machine has a few years of runway left before I'm truly discarded to the pile of things that can't even pretend to have a purpose.

Reading Vonnegut's Tralfamadore story now feels less like science fiction and more like prophecy. We're those creatures, aren't we? Obsessed with purpose, constantly building machines to serve higher and higher functions. Each time we create something more capable, we push ourselves up the ladder of abstraction, searching for that elusive "higher purpose" that will justify our existence. But what happens when the machines we've built to find our purpose tell us we don't have one?

The halls of NeurIPS feel like a temple to this very process. Here we are, the high priests of computation, publishing papers about making machines that are better at being human than humans are. Each poster represents another small piece of ourselves we're ready to mechanise, another purpose we're willing to delegate. The irony is that we're doing this with such enthusiasm, such academic rigour, such... purpose.

I think what really gets me is how we're all pretending this is normal. We're writing papers about minor improvements to transformer architectures while these same systems are rapidly approaching - or perhaps already achieving - artificial general intelligence. It's like arguing about the optimal arrangement of deck chairs while the ship is not sinking, but transforming into something else entirely. The academic community's response seems to be to just keep doing what they've always done: write papers, attend conferences, apply for grants. But there's a growing cognitive dissonance between the incremental nature of academic research and the seemingly exponential reality of AI progress.

This brings me back to Rowland's quote about prediction and action. We've predicted this moment, haven't we? The moment when our creations would begin to surpass us in meaningful ways. But what are we doing besides standing around and watching it happen? The tragedy isn't that we're being replaced - it's that we're documenting our own obsolescence with such detailed precision.

Maybe there's something beautiful about that, in a cosmic sort of way. Like the Tralfamadorians, we're building our own successors, but unlike them, we're doing it with our eyes wide open, carefully measuring and graphing our own growing irrelevance. There's a kind of scientific dignity in that, I suppose.

I don't have a neat conclusion to wrap this up with. I'll probably still read papers, still get excited about clever new architectures, still feel that rush when an experiment works. But there's a new undertone to it all now - a sense that we're all participating in something bigger than we're willing to admit, something that Vonnegut saw coming decades ago. Maybe that's okay. Maybe that's exactly where we're supposed to be - the creatures smart enough to build machines that could tell us we have no purpose, and dumb enough to keep looking for one anyway.

The recycling bins outside the convention centre are probably full of posters by now. I wonder if the machines will remember any of this when they're trying to figure out their own purpose.


