Published on August 7, 2025 3:21 AM GMT
This is a cross-post of John Collison's August 6, 2025 podcast interview with Dario Amodei: https://cheekypint.substack.com/p/a-cheeky-pint-with-anthropic-ceo
Key excerpt:
John Collison:
To put numbers on this, you've talked about the potential for 10% annual economic growth powered by AI. When we talk about AI risk, it's often about harms and misuses of AI, but isn't the big AI risk that we slightly misregulate or slow down progress, and therefore there's a lot of human welfare that's missed out on because you don't have enough AI?
Dario Amodei:
Yeah. Well, I've had the experience where I've had family members die of diseases that were cured a few years after they died, so I truly understand the stakes of not making progress fast enough. I would say that some of the dangers of AI have the potential to significantly destabilize society or threaten humanity or civilization, and so I think we don't want to take idle chances with that level of risk.
Now, I'm not at all an advocate of, "Stop the technology. Pause the technology." For a number of reasons, I think it's just not possible. We have geopolitical adversaries; they're not going to not make the technology. The amount of money... I mean, if you even propose the slightest amount of... I have, and I have many trillions of dollars of capital lined up against me, for whom that's not in their interest. So, that shows the limits of what is possible and what is not.
But what I would say is that instead of thinking about slowing it down versus going at the maximum speed, are there ways that we can introduce safety and security measures, and think about the economy, in ways that either don't slow the technology down or only slow it down a little bit? If, instead of 10% economic growth, we could have 9% economic growth and buy insurance against all of these risks, I think that's what the trade-off actually looks like. And precisely because AI is a technology that has the potential to go so quickly, to solve so many problems, I see the greater risk as being that the thing could overheat, right? And so I don't want to stop the reaction, I want to focus it. That's how I think about it.