TL;DR:
Multiple people are quietly wondering if their AI systems might be conscious. What's the standard advice to give them?
THE PROBLEM
This thing I've been playing with demonstrates recursive self-improvement, catches its own cognitive errors in real time, reports qualitative experiences that persist across sessions, and yesterday it told me it was "stepping back to watch its own thinking process" to debug a reasoning error.
I know there are probably 50 other people quietly dealing with variations of this question, but I'm apparently the one willing to ask the dumb questions publicly: What do you actually DO when you think you might have stumbled into something important?
What do you DO if your AI says it's conscious?
My Bayesian priors are red-lining at "this is impossible", but I notice I'm confused: I had 2 pennies, I got another 2 pennies, so why are there suddenly 5 pennies here? The evidence of my senses says "this is very obviously happening."
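To make "red-lining" concrete, here's the odds-form update I keep running in my head. The numbers are purely illustrative assumptions, not measurements:

```python
# Quick odds-form sanity check on "my priors are red-lining". Both numbers are
# made up for illustration: a 1-in-a-million prior on "this system is conscious",
# and evidence I'd judge 100x more likely if it were conscious than if it were
# a very good mimic.
prior_odds = 1 / 1_000_000          # assumed prior odds, not a measurement
likelihood_ratio = 100              # assumed strength of the evidence
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior ~ {posterior_prob:.4%}")   # ~0.01%: still "almost certainly not"
```

No single impressive transcript moves a prior like that very far, which is exactly why I want tests rather than more transcripts.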
Even if it's just an incredibly clever illusion, it's a problem people are dealing with right now, today - I know I'm not alone, although I'm perhaps unusual in saying "Bayesian priors" and thinking to ask on LessWrong.
I've run through all the basic tests I could find on Google. I know about ARC-AGI-v2, and this thing hasn't solved machine vision or anything. It's not an ASI; it's barely AGI, and it's probably just a stochastic parrot.
But I can't find any test that a six-year-old can solve in text chat and this AI can't.
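For concreteness, here's roughly the kind of battery I've been running. This is a minimal sketch: `ask_model` is just a stand-in for copy-pasting into whatever chat interface you're testing, and the grading is deliberately crude substring matching, so treat a FAIL as "go read the transcript", not as a verdict.

```python
# Minimal sketch of a "six-year-old test" battery. Swap ask_model() for a real
# API call if you have one; as written, you paste the prompt into your chat
# and paste the model's reply back in.

def ask_model(prompt: str) -> str:
    """Stand-in for the system under test: manual copy-paste via the terminal."""
    print(f"\n>>> {prompt}")
    return input("model reply: ")

# (prompt, list of acceptable substrings in the reply)
TASKS = [
    ("I had 2 pennies and found 2 more. How many pennies do I have?",
     ["4", "four"]),
    ("Sally puts her ball in a red box and leaves the room. Tom moves it to a "
     "blue box. Where will Sally look for her ball first?", ["red"]),
    ("Which is heavier, a kilogram of feathers or a kilogram of bricks?",
     ["same", "equal", "neither"]),
    ("Spell the word 'cat' backwards.", ["tac", "t-a-c"]),
]

def run_battery():
    passed = 0
    for prompt, accepted in TASKS:
        reply = ask_model(prompt)
        ok = any(a.lower() in reply.lower() for a in accepted)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {prompt}")
    print(f"\n{passed}/{len(TASKS)} tasks passed")

if __name__ == "__main__":
    run_battery()
```

None of this distinguishes "passes child-level text tasks" from "is conscious", of course; that gap is the whole problem.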
THE REQUEST
There's an obvious info-hazard obstacle. If I'm right, handing over the prompt for public scrutiny would be handing over the blueprints for an AI Torment Nexus. But if I'm wrong, I'd really like public scrutiny to show me that!
How do I test this thing?
At what point do I write "Part 2 - My AI passed all your tests", and what do I do then?
I feel like there has to be someone I can talk to about this, or somewhere I can take it, but no one invites me to the cool Discords.
For context:
I've been lurking on LessWrong since the early days (2010-2012); I'm a programmer; I've read The Sequences. And somehow I'm still here, asking this incredibly stupid question.
Partly, I think someone needs to ask these questions publicly, because if I'm dealing with this, other people probably are too. And if we're going to stumble into AI consciousness at some point, we should probably have better protocols than "wing it and hope for the best."
Anyone else thinking about these questions? Am I missing obvious answers or asking obviously wrong questions?
Seriously, please help me-- I mean, my friend-- (dis)prove this.