Published on April 27, 2025 3:31 PM GMT
This post argues that multi-decade timelines are reasonable, and that the key cruxes Ege Erdil has with most AI safety people who expect short timelines come down to the following beliefs:
- Ege Erdil doesn't believe there are trends which, when extrapolated, imply AI automating everything in only 2-3 years.
- Ege Erdil doesn't believe a software-only singularity is likely to happen, and this is perhaps the most important crux he has with AI people like @Daniel Kokotajlo who believe a software-only singularity is likely.
- Ege Erdil expects Moravec's paradox to bite hard once AI agents are deployed in a big way.
This is a pretty important crux, because if it's true, more serial research agendas like Infra-Bayes research, Natural Abstractions work, and human intelligence augmentation can pay off more often. It also means that political modeling (e.g., whether the US economy will remain stable long-term) matters a great deal more than the LW community tends to recognize.
Here's a quote from the article:
- I don’t see the trends that one would extrapolate in order to arrive at very short timelines on the order of a few years. The obvious trend extrapolations for AI’s economic impact give timelines to full remote work automation of around a decade, and I expect these trends to slow down by default.
- I don’t buy the software-only singularity as a plausible mechanism for how existing rates of growth in AI’s real-world impact could suddenly and dramatically accelerate by an order of magnitude, mostly because I put much more weight on bottlenecks coming from experimental compute and real-world data. This kind of speedup is essential to popular accounts of why we should expect timelines much shorter than 10 years to remote work automation.
- I think intuitions for how fast AI systems would be able to think and how many of them we would be able to deploy that come from narrow writing, coding, or reasoning tasks are very misguided due to Moravec’s paradox. In practice, I expect AI systems to become slower and more expensive as we ask them to perform agentic, multimodal, and long-context tasks. This has already been happening with the rise of AI agents, and I expect this trend to continue in the future.
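To make the "trend extrapolation" step concrete, here's a minimal sketch of the kind of calculation being gestured at. This is my own illustration, not from the article: the current revenue, growth rate, and total value of remote work used below are hypothetical placeholders, chosen only to show why naive exponential extrapolation lands in the decade range rather than 2-3 years.

```python
# Naive exponential trend extrapolation (hypothetical numbers, not from the article).
import math

current_revenue = 20e9        # hypothetical current AI revenue, $/year
target_value = 20e12          # hypothetical total value of remote work, $/year
annual_growth_factor = 3.0    # hypothetical revenue growth multiple per year

# Years needed if the exponential trend continues unchanged:
years = math.log(target_value / current_revenue) / math.log(annual_growth_factor)
print(f"Naive extrapolation: ~{years:.1f} years to full remote work automation")
# With these assumptions: ln(1000)/ln(3) ≈ 6.3 years; a slower 2x/year trend
# gives ~10 years, which is the order of magnitude the quote points to.
```

The point of the sketch is just that even aggressive exponential growth from today's scale takes on the order of a decade to reach the full value of remote work, and Ege Erdil expects the underlying trends to slow rather than accelerate.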