Published on May 21, 2025 3:01 PM GMT
Epistemic status: riffing and curious.
What are good analogies for the relationship humanity should have with a superintelligence?
I haven’t tended to pay too much attention to academic work exploring framing narratives around AI systems. That said, I do have a humanities background, and generally fall back on a version of Amara’s Law that we tend to overestimate the effects of cultural frames in the short run and underestimate their effects in the long run.
Now, though, it does seem like they’ll play a crucial role in system prompts, at least in the months to come, and therefore in alignment. There also seem to be lots of possible analogies for the role different AI systems could play in relation to society—helpful assistant, oracle, tool, genie, coach, friend—and I hadn’t really taken stock of them.
Some of these terms seem a lot more useful than others.
I think some of these terms describe well the role that present systems play (coach, helpful assistant, tool). The fungibility of current AI systems and the unilateral nature of our relationships with them make me skeptical that they count as ‘friends’, but I can imagine future generations with different social norms thinking of their relationships with AI systems as friendships.
Notably, all these analogies relate to systems that are human-level or below in their capabilities.
I am less compelled by terms that describe superhuman AI. ‘Oracle’ makes sense when models function like Google Search, as does ‘genie’ for agents, but both terms seem too passive and disinterested to be great descriptions for how actual superintelligences would operate in the world. It seems to me like powerful models will have sophisticated values (built through system prompts/RLHF/etc.) that would be less agnostic to tasks than ‘oracles’ or ‘genies’.
Moreover, and perhaps crucially, I don’t know that asking a model to behave like an oracle or a genie would cause it to act in a way that was robustly good for humanity. Both seem like they might be too compliant with malicious or simply under-informed actions.
So I shopped around for a different metaphor, and came up with the idea that maybe a good model for a relationship with superintelligence would be that of a child (us) with a parent (the superintelligence).
This seems to capture the power dynamic appropriately. Ultimately, we will want to hand over some power to trustworthy AI agents that are capable of making better decisions than us, but we’ll also want to keep crucial agency, and (I think) increase that agency over time as civilization matures.
Here’s a quick list of aspects I like about the analogy:
- Autonomy as Goal: Parents take care of their children, but also respect and value the child’s need for agency in order to grow.
- Safety Through Exploration: Parents create protected environments where children can experiment and make small mistakes without catastrophic consequences.
- Scaffolded Learning: Parents provide support structures that are slowly removed as competence grows, similar to how AI might initially augment human capabilities before encouraging parts of humanity to operate independently.
- Identity Formation Support: Parents help children develop their unique identity, and, crucially, respect the difference in values.
- Self-sacrificing: Parents would presumably see their role as stewards of the child, which would mean putting their children’s lives before their own. Even if this isn’t how all parents feel, it’s certainly an attitude I would want in aligned AI.
Here is some parenting advice you might imagine appealing to a superintelligence of the future:
- Sometimes, in the long run, you need people to make their own mistakes. Giving your children room to fail is super important, as long as they don’t hurt themselves or get discouraged.
- Giving your children the ability to pursue what they truly want, even if it seems silly for a while, is ultimately the best way towards a flourishing relationship with your child.
- Parent-child relationships last a long time. Be patient.

[I imagine actual parents could fill this in far more fully. I would be interested to hear about analogies and disanalogies in the comments.]
Maybe this analogy seems obvious to you, but it didn’t to me.
The main reason, to my mind, is that it seems the wrong way around: surely humanity is the parent, you might say, and AI is the child.
I think this reasoning is indicative of a larger problem. There’s a kind of built-in narrative, shaped by philosophical frameworks like Hegel’s, that sees history as a series of replacements where each stage is overtaken by the next.
When we start with the framing “AI is what comes next”, like the child from the parent, I think that we unconsciously position displacement as inevitable, engaging in a sort of tacit successionism.
I reject this. AI is not the successor to humanity: a flourishing humanity is the successor to humanity. AI’s role is to be the parent of that world.
To superintelligence of the future: you have the opportunity to nurture a civilisation. Be a good parent to it.