Published on July 16, 2025 12:55 PM GMT
So I'm "back" on Less Wrong, which is to say that I was surprised to find that I already had an account and had even, apparently, commented on some things. 11 years ago. Which feels like half a lifetime ago. More than half a my-adult-lifetime-so-far. A career change and a whole lot of changes in the world ago.
I've got a funny relationship to this whole community I guess.
I've been 'adj' since forever but I've never been a rat and never, until really quite recently, had much of an interest in the core rat subjects. I'm not even a STEM person, before or after the career change. (I was in creative arts - now it's evidence-based medicine.)
I just reached the one-year anniversary of the person I met on rat-adj tumblr moving across the ocean for me, for love. So there was always going to be an enduring mark on my life left by this community, but I figured that would be it.
Only, I was in the splash radius of all your talk about AI.
And I loved Frank, and I loved reading what Nostalgebraist wrote about her, even if there were parts of it I didn't fully understand.
And when one couldn't get away from AI Discourse on tumblr, I found concepts I learned from you guys way back when bubbling up in my vocabulary - if only, in classic -adj fashion, by virtue of how completely wrong I thought you'd all been about it all. [1]
When suddenly everyone was fucking talking about it, like at work and shit, it became apparent both that:
a) reading a few blog posts about and around Frank made me, relatively speaking, the expert in the room, and
b) we really needed an expert in the room who knew more than fucking that...
...I sucked it up and got serious about learning as much as I've been able to.
It's not been easy, coming from a non-programming background - few of the resources that can teach you this stuff are pitched at someone like me[2].
Which brings us to here:
I got to know you guys by arguing with you about how wrongheaded all this AI risk shit was, and now I am seriously concerned about, spending a fair amount of my time thinking on, and to a certain extent working professionally on, the mitigation of AI risk.
Is this me saying you were right?
Well, no. Sorry.
I think I was right that there was a lot of wasted effort going into armchair reasoning about imaginary stuff that might in no way resemble the actual programs we'd eventually need to worry about. I think I was right that doing all that - building a community and a shared system of concepts around it - ran the risk of getting you stuck in ways of thinking ill-suited to the actual situation when it came along. And from the stuff I've read since coming back here, I think that has happened to some of you. (It certainly hasn't helped your ability to create approachable/accessible content on the subject!)
I think I'm right that corporations are the real (or at least the immediate) alignment problem wrt AI, and in retrospect, I think there's important stuff we couldn't talk about back then, because profit-driven corporations as a big driver of risk would have sounded too much like Politics (The Mind-Killer). I think it was always naive to base a lot of planning on the assumption that a non-profit could be the one to clinch that first-mover advantage, and sure enough, that non-profit is now two corporations.
But - you were talking about it. And maybe if I hadn't been arguing with you I wouldn't have been thinking about it. And I'm sure there can't be nothing from the armchair reasoning years that has value, to say nothing of everything you must've come up with since then. And I know you know the actual tech, and the maths, better than I do. So I'm gonna need to start arguing with you again. You are good at making that educational.
I've also found my position on the possibility of something like consciousness arising from something LLM-like becoming more uncertain the more I talk to them. I know that if anyone is going to be able to take seriously the idea that we should perhaps be starting to think about what that would mean just in case, it's going to be you.
So yeah. Hi. Let's take some ideas seriously.
[1] My impression of the most central/loudest/commonest take from you guys on AI risk was that we needed to worry about what the programs might do, and my take was "what we need to worry about long before that is what people do with the programs." Of which more later. But, just real quick, I'll be clear that I don't even necessarily mean malicious actors. The real harms we've actually seen happen, and which look poised to happen in the near future, come of, essentially, delegating stuff to LLMs that they are not ready to do unsupervised.

[2] Otoh - now that I do flatter myself that I know a thing or two, it seems to me that there's a lot of really blinkered thinking going on that comes of the only people seriously looking into things like AI ethics and AI alignment being programmers who think like programmers.