The idea for this article emerged from discussions with fellow rationalists, transhumanists, and AI safety researchers about what to do on the verge of ASI. A recurring pattern in these conversations, as I see it: even among people who intellectually accept that transformative AI is near, most remain surprisingly conservative about what they personally can achieve. They update their timelines but not their ambitions. They prepare for a world where AI changes everything, yet plan their own projects as if they're still competing against baseline humans with traditional tools.
I believe this is a mistake. The main argument of this post is simple: AI capabilities are rapidly invalidating the reference classes you use to set expectations about what you can achieve. When you estimate the difficulty of a project, the time it will take, or the resources you'll need, you're implicitly drawing on base rates from a world where humans worked without AI assistance. These base rates are becoming as obsolete as estimates of travel time from the age of horses.
Two key implications follow from recognizing that you're moving out of your reference class:
- You should be substantially more optimistic and ambitious about your projects than you would be in a counterfactual world without advancing AI. The failure modes remain similar—your startup can still find no customers, your research can still hit dead ends—but the upper bound on possible impact has risen by orders of magnitude. Projects that would have been lifetime achievements in 2015 might now be achievable in months.
- You should act quickly because this window of amplified agency will close. We may be in a brief historical moment where AI multiplies human capabilities but hasn't yet automated humans out of the loop. Whether this ends in doom or not, the period where individual humans can leverage AI for massive impact is temporary. The reference class of "having decades to build your career" may no longer apply.
The reference class you grew up in
Much of rationality is about base rate discipline. When we do not know the details of a process, it is usually better to take the outside view: put the situation into a reference class of similar cases and expect roughly similar outcomes. Outside‑view forecasting is particularly useful when the inside model is incomplete or when human biases (such as the planning fallacy) distort our internal timelines. This is why we teach ourselves to avoid generalizing from one example, to treat our own mind as a poor prototype for other minds, to watch for the planning fallacy, and to distrust arguments based purely on surface resemblance.
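To make the procedure concrete, here is a minimal sketch in Python; the reference-class numbers are hypothetical and exist only to illustrate the mechanics of replacing an inside-view estimate with a summary statistic of similar past cases.

```python
# Toy illustration of outside-view forecasting: rather than trusting an
# inside-view estimate, predict from the empirical distribution of a
# reference class of similar past projects. All numbers are hypothetical.
import statistics

# Hypothetical reference class: how long (in months) similar past projects
# actually took to complete.
actual_durations = [4, 6, 7, 9, 10, 12, 14, 18, 24, 36]
inside_view_estimate = 3  # "it should only take a quarter"

outside_view_median = statistics.median(actual_durations)  # 11.0
outside_view_p90 = sorted(actual_durations)[int(0.9 * len(actual_durations))]  # 36

print(f"Inside-view estimate:   {inside_view_estimate} months")
print(f"Outside-view median:    {outside_view_median} months")
print(f"Outside-view 90th pct.: {outside_view_p90} months")
```

The whole method, of course, presupposes that the reference class is still representative of your situation, which is precisely what the rest of this post calls into question.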
On LessWrong, the outside view vs. inside view debate has been codified in the sport of "reference class tennis." Paul Christiano explained that disputes about which reference class to use are inevitable: if we classify a future war as part of the huge class of all wars, we may ignore unique risks, but if we place it into a tiny class—e.g., "wars between nuclear powers"—we may lose useful base‑rate information. Eliezer Yudkowsky has warned that yelling "outside view!" often halts conversation without solving the hard problem; if two people pick incompatible reference classes, they end up taking their reference class and going home. Proper use of the outside view requires identifying the causal structure of the system, not shoe‑horning it into superficially similar classes. The admonition "simulation over similarity" reminds us that reasoning by analogy fails when you do not understand the underlying mechanism.
A related failure mode is fake humility. Yudkowsky draws a distinction between scientific humility—double‑checking your model and adding fail‑safes—and social modesty. A form of motivated scepticism is to claim "no one can know" about a topic you dislike and use that as an excuse not to update. Another post warns against identifying oneself with an inescapable reference class of deluded people; doing so makes evidence worthless. Still another argues that you should not lower your probability just because of the possibility of being wrong; if strong evidence appears, you should debug your model rather than fall back on blanket modesty.
These tools—base rates, reference classes, proper humility—are extremely useful in ordinary life. Obviously, most projects do run late; most would‑be entrepreneurs do not create unicorns; most research careers do not have world‑changing impact. If you are a typical person in a typical world, using typical priors is rational. But when you are dealing with phenomena that do not fit any established reference class, you may be leaving the domain where the outside view is valid. In such cases, adhering to normal base rates can be "pathetically overconservative".
By now it is common knowledge, at least among LessWrong readers, that artificial intelligence has gone from science‑fiction speculation to a regime‑changing force. When models can draft an academic paper, prove a lemma or spin up a startup in an afternoon, the historical base rates of individual impact no longer apply. The important question is not whether AI will change the game, but how you should adjust your epistemic habits and career plans when the underlying distribution shifts.
Moving out of your reference class
Suppose you are a twenty‑something researcher or entrepreneur today. You look at the historical data: most PIs spend decades on incremental science; most start‑ups fail; most authors never write a bestseller. You adopt the outside view: "What is the base rate of someone like me achieving a globally significant result?" This perspective is healthy. But the base rate you are using is anchored in a class of people who did not have immediate access to superhuman reasoning tools, ubiquitous agents or near‑instant coding assistance.
When the optimisation landscape itself changes, your reference class must change as well. The skill distribution of mathematicians may not predict your performance if you can use an agentic theorem prover; the typical success rate of start‑ups in 2010 may be a poor predictor for 2025 when founder productivity is multiplied by automated research, design and sales. Treating yourself as a "random sample from pre‑AI humans" is akin to placing yourself in a class of deluded people who ignore any extraordinary evidence. If new evidence emerges you must debug your model, not just lower your confidence because modesty seems virtuous.
The asymmetric update
When your reference class becomes obsolete, the update is not symmetric. The floor remains roughly where it was—you can still fail completely and (usually) no more than completely—but the ceiling has risen by orders of magnitude. This asymmetry should, on average, make you more optimistic about ambitious projects, not less.
Consider: pre-AI, if you attempted to build a new programming language, your expected impact was bounded by the reference class of "individual language designers." Most languages see minimal adoption; a few become niche tools; vanishingly few achieve widespread use. But with AI assistance that can implement features in hours instead of months, generate comprehensive documentation, build ecosystem tools, and even help with community management, you're no longer in the reference class of "solo language designer." You're in the reference class of "language designer with a team of tireless engineers."
The downside hasn't changed much—your language can still achieve zero adoption. But the upside has expanded dramatically. This pattern repeats across domains: the probability of total failure remains similar, but the conditional expectation given non-failure has shifted upward by perhaps an order of magnitude.
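A toy expected-value calculation makes the asymmetry vivid. All of the probabilities and payoffs below are made-up illustrative numbers: the failure probability is held fixed while the conditional payoff given success is assumed to be ten times larger.

```python
# Toy expected-value comparison under the asymmetric update.
# All probabilities and impact figures are hypothetical.

def expected_impact(p_failure: float, impact_if_success: float) -> float:
    """Expected impact when failure yields ~0 and success yields a fixed payoff."""
    return (1 - p_failure) * impact_if_success

# Pre-AI reference class: 90% chance of total failure, modest payoff on success.
pre_ai = expected_impact(p_failure=0.9, impact_if_success=1.0)

# AI-assisted: the floor (probability of failure) is unchanged, but the
# conditional payoff given non-failure is assumed to be ~10x higher.
ai_assisted = expected_impact(p_failure=0.9, impact_if_success=10.0)

print(f"Pre-AI expected impact:      {pre_ai:.2f}")       # 0.10
print(f"AI-assisted expected impact: {ai_assisted:.2f}")  # 1.00
print(f"Expected-impact ratio:       {ai_assisted / pre_ai:.0f}x")  # 10x
```

The point is not the specific numbers but the shape of the update: holding the floor fixed while multiplying the ceiling multiplies the expectation.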
The window of agency multiplication
We are likely entering a brief historical moment—perhaps measured in years, not decades—where individual agency is multiplied but not yet obsolete. This window presents unique opportunities that would have seemed delusional under previous reference classes. There are two opposite forces at play: AI empowers humans, and AI or key AI organizations outcompete other humans. It may be the case, and I believe it is, that the first force currently wins over the second, but this will probably not last long.
To start with some rather well-discussed and trivial ideas in alignment: during this window, a single researcher with AI assistance might:
- Formally verify properties of neural networks that would have required a team of mathematicians
- Build interpretability tools that map the full computational graph of large models
- Design and test thousands of variations on alignment techniques in parallel
- Create comprehensive educational resources that would have taken years to develop
These possibilities don't require waiting for AGI. Current and near-term AI systems already shift the productivity distribution enough to invalidate historical base rates. The question is not whether to update your ambitions, but how quickly you can adapt to the new landscape before the window closes.
However, this is obviously not restricted to alignment.
Some second-order alignment projects may appear to be tractable when you abandon pre-AI base rates. Consider human intelligence enhancement through biotechnology or neurotechnology—projects historically constrained by the slow pace of biological research and regulatory approval. With AI systems that can simulate protein folding, generate novel drug candidates, and analyze vast genomic datasets, a small team might achieve breakthroughs that would have required entire pharmaceutical companies. The reference class of "independent biotech researcher" no longer applies when you have AI collaborators that can run millions of virtual experiments overnight. Similarly, the strategy of "make money to fund alignment research" takes on new dimensions when AI can identify market inefficiencies, automate trading strategies, or build entire businesses with minimal human oversight. The historical failure rate of get-rich-quick schemes becomes less relevant when your schemes are designed by systems that exceed human strategic planning capabilities.
Academic research, particularly in biology, stands to be and is already being revolutionized. The traditional academic career path—PhD, postdoc, tenure track, incremental publications—assumes human-speed hypothesis generation and testing. But when AI can read every paper in a field, propose novel hypotheses, design experiments, and even interpret results, the ambitious graduate student should reconsider what constitutes a reasonable research agenda. Projects that would have been career-defining achievements—mapping a new metabolic pathway, discovering a novel class of antibiotics, understanding a fundamental mechanism of aging—might become semester-long endeavors. The reference class of "biology PhD student" was built on the assumption that you personally had to pipette, culture, and observe. When your AI collaborator can design experiments that you execute, or better yet, interface directly with automated lab equipment, you're operating in an entirely different regime.
Entrepreneurship faces perhaps the most dramatic reference class shift. The traditional startup wisdom—focus on one thing, expect years of struggle, assume a 90% failure rate—comes from a world where founders had to personally write every line of code, design every marketing campaign, and handle every customer interaction. But what happens when AI can build your MVP in a weekend, A/B test a thousand marketing messages, and provide 24/7 customer support in perfect mimicry of your brand voice? The constraint shifts from execution to judgment: knowing what to build, not how to build it. The reference class of "solo founder" or even "small startup team" breaks down when your effective team size includes dozens of AI agents handling specialized tasks.
Strategic implications
This shift in reference classes suggests several strategic updates:
1. Raise your ambition threshold. Projects that seemed like lifetime achievements in 2020 might now be 2-year plans. If you're working on AI alignment, at least consider not only contributing one small piece to the puzzle, but attempting a comprehensive solution. If you're building tools, don't settle for marginal improvements—redesign entire workflows. If fears about feasibility keep you from starting a deep-tech or research project, it may be worth reconsidering them: something that looks like it should take fifteen years may be accelerated dramatically by swarms of AI agents.
2. Adjust to your timelines. The window of agency multiplication will probably shrink. What matters is not perfecting your approach but shipping something that works while you still have a meaningful edge. The reference class of "careful researchers who spent a decade perfecting their method" may no longer apply when the method might be obsolete in two years. Much depends on the timelines, of course, but I will not discuss them here, as they are already discussed extensively elsewhere.
3. Focus, at least partially, on judgment and taste. As cognitive labor becomes cheap, the bottleneck shifts to knowing what to build and how to evaluate outputs. The most valuable skills are those that help you direct AI systems effectively: problem formulation, quality assessment, system design, and strategic thinking.
4. Prepare for phase transitions. Your reference class might shift again—perhaps multiple times—before the decade is out. Build meta-strategies that remain robust across different capability regimes. Focus on acquiring resources, relationships, and skills that retain value whether AI progress stalls, continues steadily, or accelerates dramatically.
Conclusion: The art of living between reference classes
We are living through a discontinuity in human agency. The reference classes that guided previous generations—that guided us until recently—no longer bind us. Without stable base rates, how do we avoid both overconfidence and underconfidence?
In a way, we need to treat the shift itself as data. Yes, we are moving out of our reference class, but we are not the first humans to face a capability transition. The invention of writing, the industrial revolution, the rise of computers—each created a cohort who had to abandon inherited priors. Their experiences, while not perfect analogies, offer meta-lessons about this kind of discontinuity.
The core rationalist insight remains: use the best evidence available, update continuously, and remain calibrated. The difference is that "best evidence" now includes the fact that evidence itself is arriving faster than ever. Our reference class is shifting, but our fundamental commitment to clear thinking need not.
So ask yourself: what projects have you dismissed as impossible based on pre-2024 base rates? What problems seemed intractable when you were limited to unaided human cognition? What would you attempt if you truly believed you had moved out of the statistically usual environment?
The reference class you grew up in is already obsolete. The question is: what will you do with your newfound agency before the window closes?