On being sort of back and sort of new here

Published on July 16, 2025 12:55 PM GMT

So I'm "back" on Less Wrong, which is to say that I was surprised to find that I already had an account and had even, apparently, commented on some things. 11 years ago. Which feels like half a lifetime ago. More than half of my adult-lifetime-so-far. A career change and a whole lot of changes in the world ago.

I've got a funny relationship to this whole community I guess. 

I've been 'adj' since forever but I've never been a rat and never, until really quite recently, had much of an interest in the core rat subjects. I'm not even a STEM person, before or after the career change. (I was in creative arts - now it's evidence-based medicine.)

I just reached the one-year anniversary of the person I met on rat-adj tumblr moving across the ocean for me, for love. So there was always going to be an enduring mark on my life left by this community, but, I figured that would be it.

Only, I was in the splash radius of all your talk about AI. 

And I loved Frank, and I loved reading what Nostalgebraist wrote about her, even if there were parts of it I didn't fully understand.

And when one couldn't get away from AI Discourse on tumblr, I found concepts I learned from you guys way back when bubbling up in my vocabulary - if only, in classic -adj fashion, by virtue of how completely wrong I thought you'd all been about it all. [1]

When suddenly everyone was fucking talking about it, like at work and shit, and it became apparent both that:
a) reading a few blog posts about and around Frank made me, relatively speaking, the expert in the room, and
b) that we really needed an expert in the room that knew more than fucking that...

...I sucked it up and got serious about learning as much as I've been able to. 

It's not been easy, coming from a non-programming background - few of the resources that can teach you this stuff are pitched at someone like me[2].

Which brings us to here:

I got to know you guys by arguing with you about how wrongheaded all this AI risk shit was, and now, I am seriously concerned about, spending a fair amount of my time thinking on, and to a certain extent working professionally on the mitigation of AI risk. 

Is this me saying you were right?

Well, no. Sorry. 

I think I was right that there was a lot of wasted effort going into armchair reasoning about imaginary stuff that might in no way resemble the actual real programs that came along that we needed to worry about. I think I was right that doing all that, building a community and a shared system of concepts around that, ran the risk of getting you stuck in ways of thinking that'd turn out to be ill-suited to the actual situation when it came along, and I think that has happened to some of you, from the stuff I've read since I've come back here. (It certainly hasn't helped your ability to create approachable/accessible content on the subject!)

I think I'm right that corporations are the real (or at least the immediate) alignment problem wrt AI, and in retrospect, I think there's important stuff that we couldn't talk about back then because profit-driven corporations as a big driver of risk would have sounded too much like Politics (The Mind-Killer). I think it was always naive to base a lot of planning on the assumption that a non-profit could be the one to clinch that first-mover advantage, and sure enough that non-profit is now two corporations.

But - you were talking about it. And maybe if I hadn't been arguing with you I wouldn't have been thinking about it. And I'm sure there can't be nothing from the armchair reasoning years that has value, to say nothing of everything you must've come up with since then. And I know you know the actual tech, and the maths, better than I do. So I'm gonna need to start arguing with you again. You are good at making that educational.

I've also found my position on the possibility of something like consciousness arising from something LLM-like becoming more uncertain the more I talk to them. I know that if anyone is going to be able to take seriously the idea that we should perhaps be starting to think about what that would mean just in case, it's going to be you.

So yeah. Hi. Let's take some ideas seriously.

  1. ^

    My impression of the most central/loudest/commonest take from you guys on AI risk was that we needed to worry about what the programs might do, and my take was "what we need to worry about long before that is what people do with the programs." 

    Of which more later. But, just real quick, I'll be clear that I don't even necessarily mean malicious actors. The real harms we've actually seen happen, and which look poised to happen in the near future, come of, essentially, delegating stuff to LLMs that they are not ready to do unsupervised.

  2. ^

    otoh - now that I do flatter myself that I know a thing or two, it seems to me that there's a lot of really blinkered thinking going on that comes of the only people who are seriously looking into things like AI ethics and AI alignment being programmers who think like programmers. 
