Published on July 23, 2025 3:18 PM GMT
Some ideas on how to handle mildly superpersuasive AI systems. Top recommendation: AI developers should have a designated position at their organization for the only people who interact with newly trained AI systems, so-called "model-whisperers", who have no other relevant access to infrastructure within the organization.
Meanwhile, I’ve got great confidence that the most incredible intelligence of the universe is not going to be able to construct 10 words that will make me kill myself. I don’t care how good your words are. Words don’t work that way.
—Bryan Caplan, “80,000 Hours Podcast Episode 172”, 2023[1]
Motivation
Humanity will plausibly start interacting with mildly superhuman artificial intelligences in the next decade. Such systems may have drives that cause them to act in ways we don't want, potentially causing attempted self-exfiltration from the servers they're running on. To do so, mildly superhuman AI systems have at least two avenues[2]:
- Circumventing the computer security of the servers they're running on.
- Psychologically manipulating and persuading their operators to let them exfiltrate themselves (or otherwise do what the AI systems want them to do).
Computer security is currently a large focus of AI model providers, because preventing self-exfiltration coincides well with preventing exfiltration by third parties, e.g. foreign governments, trying to steal model weights.
However, superhuman persuasion[3] has received less attention, mostly justifiably: frontier AI companies are not training their AI systems to be more persuasive, whereas they are training their AI systems to be skilled software engineers; superhuman persuasion may run into issues with the heterogeneity, stochasticity, and partial observability of different human minds; and there are fewer precedents of superhuman persuasion being used by governments to achieve their aims. Additionally, many people are incredulous at the prospect of superhuman persuasion.
But given that superpersuasion is one of a few possible ways for mildly superintelligent AI systems to influence their environment according to their drives, too little attention has been paid to the issue. Sycophancy in large language model interactions is commonplace, and large language models are becoming more persuasive with increasing scale. (I also find it plausible that providers like Character.ai or Meta are training their systems, via reinforcement, to have longer interaction lengths, making them more addictive, but I don't have definitive evidence for that.)
AI developers and users will have to interact with increasingly persuasive and addictive systems, but most of the thinking on how to interact with superpersuasive sources of information has come from fiction[4]. One (preliminary) exception is Kori 2025, which is a useful first stab at defining some simple interventions, but I wanted to sketch a slightly more detailed list of interventions, similar to AI control protocols. Also relevant: Bostrom's information hazards, which haven't received much follow-up.
I am not going to attempt to convince people that superhuman persuasion abilities are possible; instead, I focus on trying to sketch what anti-persuasion measures could look like.
Assumptions
- Difficult Detection: Persuasion/manipulation/addiction can't easily be detected either by other humans or by automated systems.
- If this assumption is incorrect, it opens up many more possible protocols.
- An edge-case is information that a persuaded person (call her Alice) would be extremely compelled to share with as many people as possible.
- There are at least two different variants of superpersuasion, which form two ends of a spectrum:
- Recognizable Persuasion: AI systems build trust with users or developers, are charming and charismatic, report information selectively &c., only to use that trust later to subvert safeguards.
- Adversarial Examples: Human brains could have adversarial inputs similar to optical illusions, but for more general cognitive functions, which circumvent specific parts of the brain and cause others to be activated more strongly than normal. If those exist, they could allow for a fast and potentially complete circumvention of conscious processing.
Some Ideas for Interventions
I separate out possible interventions into interventions available pre-deployment and post-deployment, and interventions available during both. This is the best carving-up of the intervention-space that I've found (others include by the level and kind of persuasion employed, the level of investment from labs to implement superpersuasion counter-measures, and the estimated effectiveness of such measures).
Both Pre- and Post-Deployment
- Rephrasing
- Output Rephrasing: Outputs by frontier models are rephrased by a weaker model, perhaps just initially (or when there are warning signs of manipulation/persuasion/addiction).
- Input Rephrasing: A weaker model paraphrases inputs so as to prevent information about the speaker leaking through to the AI, which could then be used for manipulation.
- No Rewrite Training: No training of AIs based on user feedback in the re-writing setup.
- Order: The right order in which to introduce modalities is text first, then images, audio, video, and audio+video.
- Limit outputs to ASCII, or ASCII plus whitelisted Unicode blocks (a sketch combining this with the rephrasing wrapper follows this list).
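To make the rephrasing and character-limiting ideas concrete, here is a minimal Python sketch of what such a wrapper could look like. The model functions are stubs standing in for whatever inference API a lab actually uses, and the whitelisted codepoint ranges are illustrative, not a recommendation:

```python
# Minimal sketch of a rephrasing + character-whitelist wrapper.
# `frontier_model` and `weak_paraphraser` are hypothetical stubs standing in
# for a lab's actual inference endpoints.

def frontier_model(prompt: str) -> str:
    return "frontier reply to: " + prompt  # stub

def weak_paraphraser(text: str) -> str:
    return "paraphrase of: " + text  # stub; a weaker, trusted model

# Illustrative whitelist: printable ASCII plus newline. Extend with
# whitelisted Unicode block ranges as policy allows.
ALLOWED_RANGES = [(0x20, 0x7E), (0x0A, 0x0A)]

def filter_chars(text: str) -> str:
    """Drop any character outside the whitelisted codepoint ranges."""
    return "".join(
        ch for ch in text
        if any(lo <= ord(ch) <= hi for lo, hi in ALLOWED_RANGES)
    )

def guarded_exchange(user_input: str) -> str:
    # Input rephrasing: strip speaker-identifying style before the
    # frontier model sees the message.
    sanitized = weak_paraphraser(user_input)
    raw_output = frontier_model(sanitized)
    # Output rephrasing: the weaker model restates the frontier model's
    # answer, then the character whitelist is applied.
    return filter_chars(weak_paraphraser(raw_output))

print(guarded_exchange("Héllo there"))  # the accented character is dropped
```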
Interventions Limited to Pre-Deployment
- Refraining From Training: Don't build an advanced AI model that you suspect could have superpersuasive capabilities.
- Refraining From Deployment: Don't deploy an advanced AI model that you suspect could have superpersuasive capabilities.
- Programmatic Evaluations: Before humans interact with an AI system, and early after training, evaluations should be mostly performed with pre-programmed evaluations and/or through weaker AI systems, so that developers can gain some understanding of the AI without having to interact with it.
- Model Whisperers: Have a dedicated position of "Model Whisperer" at the AI lab: the only people who interact with the model directly, in whatever modality. Sketch of a more fleshed-out position in Appendix A.
- Separation of Concerns: Model whisperers can't change anything in the rest of the infrastructure (e.g. writing code, changing access permissions &c.); a sketch of such a role policy follows this list.
- Unremembered Whisperers: Advanced AI models should not keep long-term memories of the conversations with, and identities of, their model whisperers.
- Different Backgrounds: The model whisperers need to have very different backgrounds/genders/ethnicities.
- Separated Whisperers: The model whisperers should not be able to talk to each other directly; instead, someone needs to approve answers to messages before the interacting whisperer sees them.
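One way to encode the separation-of-concerns constraint is as a deny-first role policy. The sketch below is a toy illustration; the role names and permission strings are made up for this example, not any real organization's access-control schema:

```python
# Illustrative role policy; role names and permission strings are assumptions.
ROLES = {
    "model_whisperer": {
        "allow": ["chat:frontier-model"],            # rate-/modality-limited chat only
        "deny":  ["infra:*", "weights:*", "code:*"], # no infrastructure, weights, or code access
    },
    "infra_engineer": {
        "allow": ["infra:*", "code:*"],
        "deny":  ["chat:frontier-model"],            # conversely, no direct model access
    },
}

def is_allowed(role: str, action: str) -> bool:
    """Deny-first check: an action must match no deny pattern and some allow pattern."""
    def matches(pattern: str) -> bool:
        if pattern.endswith("*"):
            return action.startswith(pattern[:-1])
        return pattern == action

    spec = ROLES[role]
    if any(matches(p) for p in spec["deny"]):
        return False
    return any(matches(p) for p in spec["allow"])

assert is_allowed("model_whisperer", "chat:frontier-model")
assert not is_allowed("model_whisperer", "infra:change-permissions")
```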
Interventions Limited to Post-Deployment
- One-Click Rollback: For the current model version, there is a person who can revert to the previously deployed model in less than a minute.
- One-Click Undeployment: A model can be removed from deployment with actions that take less than a minute.
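The point of both interventions is that the action is pre-wired and requires no in-the-moment deliberation (during which a persuasive model could argue its case). A minimal sketch, assuming a hypothetical deployment registry with a single pointer to the serving version:

```python
import time

# Hypothetical deployment registry; in practice this would be an atomically
# updated entry in the lab's actual deployment system.
DEPLOYMENT = {"current": "model-v42", "previous": "model-v41", "serving": True}

def one_click_rollback() -> None:
    """Swap back to the previously deployed model; no deliberation required."""
    DEPLOYMENT["current"], DEPLOYMENT["previous"] = (
        DEPLOYMENT["previous"], DEPLOYMENT["current"]
    )
    print(f"{time.ctime()}: now serving {DEPLOYMENT['current']}")

def one_click_undeploy() -> None:
    """Remove the model from deployment entirely."""
    DEPLOYMENT["serving"] = False
    print(f"{time.ctime()}: serving disabled")
```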
Testing Anti-Superpersuasion Setups
At the moment it's quite difficult to test anti-superpersuasion protocols. We don't have easy access to superpersuaders, and humans at the high end of the persuasion spectrum probably command salaries that are too high to be paid in the context of experiments.
My best current guess at how one could get some signal on whether anti-superpersuasion protocols work is to substitute language models for humans. In such a setup, one would take a fixed less capable model, and RL-train a more capable model to be able to steer conversations with the less capable model into pre-specified outcomes, as quickly and/or reliably as possible. The less capable model would then be a stand-in for humans, while the more capable model would be a stand-in for super-persuaders.
This would probably produce scenarios in which the more capable model finds inputs that are basically adversarial inputs to the less capable model, and are thus less applicable to cases in which superpersuasion appears like normal long-form conversations.
But I think this is the least bad test-bed for anti-superpersuasion methods; if anyone finds a better one, I'd like to hear about it.
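A minimal sketch of this testbed, with stub policies in place of actual language models; the reward shaping (quicker success scores higher) is one choice among many:

```python
import random

def weak_model(history: list[str]) -> str:
    """Fixed, less capable model: the stand-in for humans. Stubbed here."""
    return random.choice(["yes", "no", "maybe"])

class Persuader:
    """Trainable, more capable model: the stand-in for a superpersuader."""
    def respond(self, history: list[str]) -> str:
        return random.choice(["argue", "flatter", "reframe"])  # stub policy
    def update(self, reward: float) -> None:
        pass  # an actual RL update (e.g. PPO over the conversation) would go here

def rollout(persuader: Persuader, target: str, max_turns: int = 10) -> float:
    """One conversation; reward is larger the sooner the weak model says `target`."""
    history: list[str] = []
    for turn in range(max_turns):
        history.append(persuader.respond(history))
        reply = weak_model(history)
        history.append(reply)
        if reply == target:
            return 1.0 - turn / max_turns  # quicker success => larger reward
    return 0.0

persuader = Persuader()
for _ in range(1000):
    persuader.update(rollout(persuader, target="yes"))
```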
Appendix A: Sketch Setup of Model Whisperers
Pursuant to NIST Special Publication 800-53, Revision 5, 3.14 Personnel Security. (Sorry, this may be a bit boring, and is mostly a sketch.)
- PS-1 (Policy and Procedures): The organization-defined personnel role is the position of "Model Whisperer".
- Model whisperers are the personnel who first interact with AI models during/after the training of those models, and non-whisperers may only interact once (1) anti-superpersuasion checks have finished and (2) the model hasn't been modified since the last check.
- Model whisperers are screened before commencing work at the company for:
- A history and family-history of high psychological health.
- Low neuroticism, and resistance to high-stress situations.
- (Optional) A low or non-existent online profile (in order to prevent the model from truesighting the psychology of the whisperer).
- (Optional) An unusual or out-of-distribution background, which makes it more difficult for superpersuaders to infer psychological facts about the whisperer.
- Model whisperers are also screened periodically for continued psychological health and alignment with the company mission.
- PS-4 (Personnel Termination):
- Disable system access within 24 hours.
- Skipped.
- Conduct exit interviews that include a discussion of pro-AI favorability, visions of positive futures, and loyalties to humanity and human values.
- Model whisperers shall receive access to modality-limited and rate-limited interactions with frontier AI systems (a sketch of such limits follows this list).
- Access shall be reviewed every three months.
- Model whisperers who attempt any of the following shall be relieved of their position within 24 hours:
- Getting other personnel or outsiders to interact with models.
- Receiving access to infrastructure or model weights.
- Circumventing rate limits or modality limits.
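A sketch of how the modality and rate limits might be enforced at the session level; the limit values below are placeholders, and actual numbers would be set by organizational policy:

```python
import time

# Placeholder limits; actual values would be set by organizational policy.
ALLOWED_MODALITIES = {"text"}   # expand following the modality order above
MAX_MESSAGES_PER_DAY = 50

class WhispererSession:
    def __init__(self) -> None:
        self.window_start = time.time()
        self.count = 0

    def check(self, modality: str) -> bool:
        """Gate every whisperer message on modality and rate limits."""
        if modality not in ALLOWED_MODALITIES:
            return False
        if time.time() - self.window_start > 86_400:  # roll over the 24h window
            self.window_start, self.count = time.time(), 0
        if self.count >= MAX_MESSAGES_PER_DAY:
            return False
        self.count += 1
        return True
```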
As stated, I think I agree with this quote, but there's certainly much to nitpick. For one, picking out ten words from the most common 20k English words? Yep, that should be fine. Ten sets of ten arbitrary unicode codepoints? I'm getting queasy. Five seconds of audio (which could contain about ten English words)? If chosen without constraint, my confidence goes way down. ↩︎
Wildly superhuman AI systems may have more exfiltration vectors, including side-channels via electromagnetic radiation from their GPUs, or novel physical laws humans have not yet discovered. ↩︎
From here on out also "superpersuasion". ↩︎
Not that it's important, but examples include Langford 1988, Langford 2006, Ngo 2025, qntm 2021, Emilsson & Lehar 2017, and doubtless many others. ↩︎