The Practical Value of Flawed Models: A Response to titotal’s AI 2027 Critique

This article responds to a critique of the AI 2027 forecast, particularly its speculation about AI timelines. It argues that while the detailed analysis of the AI 2027 model is valuable, overemphasizing "robust" planning can lead to underreaction to AI progress. The author discusses strategies for responding to AI's uncertain development at both the individual and governance levels, and stresses the importance of acting under different timeline forecasts. It concludes that even flawed models remain meaningful for thinking and decision-making about AI timelines.

🧐 The AI 2027 forecast has been criticized for its unjustified superexponential time-horizon growth curve and other modeling issues; even so, it remains one of the few serious attempts to quantify AI timelines and explore their implications.

🤔 The author argues that overemphasizing "robust" planning can lead to underreaction to AI progress, when most people are already barely reacting at all.

💡 The article discusses how difficult "robust" planning is in AI governance, emphasizing how individual career choices and governance strategies diverge under different timeline forecasts. The author argues that in AI governance, acting quickly may matter more than waiting conservatively.

⚖️ The author notes that even flawed models make AI timeline forecasts worthwhile: despite its problems, the AI 2027 model has still advanced thinking about AI timelines and progress.

Published on June 25, 2025 10:15 PM GMT

Crossposted from my Substack

@titotal recently posted an in-depth critique of AI 2027. I'm a fan of his work, and this post was, as expected, phenomenal.

Much of the critique targets the unjustified weirdness of the superexponential time horizon growth curve that underpins the AI 2027 forecast. During my own quick excursion into the Timelines simulation code, I set the probability of superexponential growth to ~0 because, yeah, it seemed pretty sus. But I didn’t catch (or write about) the full extent of its weirdness, nor did I identify a bunch of other issues titotal outlines in detail. For example:

…And more. He’s also been in communication with the AI 2027 authors, and Eli Lifland recently released an updated model that improves on some of the identified issues. Highly recommend reading the whole thing!
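
To give a feel for why so much hangs on that one modeling choice, here is a deliberately crude Monte Carlo sketch. It is not the actual AI 2027 Timelines model; it is a toy in the same spirit, with invented horizon targets and doubling-time parameters, showing how the probability you assign to superexponential time-horizon growth can swing the headline number by itself:

```python
import math
import random
import statistics

# Toy sketch, NOT the AI 2027 Timelines model: a simplified Monte Carlo in the
# same spirit, meant only to show how much the headline forecast can hinge on
# the probability assigned to superexponential time-horizon growth.
# Every parameter below is invented for illustration.

CURRENT_HORIZON_HOURS = 1.0     # hypothetical current task time horizon
TARGET_HORIZON_HOURS = 2000.0   # hypothetical horizon treated as "transformative"


def median_years_to_target(p_superexponential: float, n_samples: int = 20_000) -> float:
    """Median years until the target horizon is reached in the toy simulation."""
    results = []
    for _ in range(n_samples):
        # Sample an initial doubling time (years per doubling) from a lognormal.
        doubling_time = random.lognormvariate(math.log(0.5), 0.5)
        superexponential = random.random() < p_superexponential
        horizon, years = CURRENT_HORIZON_HOURS, 0.0
        while horizon < TARGET_HORIZON_HOURS:
            years += doubling_time
            horizon *= 2
            if superexponential:
                doubling_time *= 0.9  # each successive doubling comes 10% faster
        results.append(years)
    return statistics.median(results)


if __name__ == "__main__":
    for p in (0.0, 0.4, 0.8):
        print(f"P(superexponential) = {p:.1f} -> median ~{median_years_to_target(p):.1f} years")
```

Flip that single probability and the median forecast moves by years, which is roughly the dynamic titotal's critique zeroes in on.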

That said—it is phenomenal, with an asterisk. It is phenomenal in the sense of being a detailed, thoughtful, and in-depth investigation, which I greatly appreciate since AI discourse sometimes sounds like: “a-and then it’ll be super smart, like ten-thousand-times smart, and smart things can do, like, anything!!!” So the nitty-gritty analysis is a breath of fresh air.

But while titotal is appropriately uncertain about the technical timelines, he seems way more confident in his philosophical conclusions. At the end of the post, he emphatically concludes that forecasts like AI 2027 shouldn’t be popularized because, in practice, they influence people to make serious life decisions based on what amounts to a shoddy toy model stapled to a sci-fi short story (problematically disguised as rigorous, empirically-backed research).

He instead encourages plans that are “robust to extreme uncertainty” about AI and warns against taking actions that “could blow up in your face” if you're badly wrong.

On the face of it, that sounds reasonable enough. But is it?

“Robust” Planning: The People Doing Nothing

Here, it might be helpful to sort the people who update (i.e. make serious life decisions) based on AI 2027 & co. into two categories: people involved in AI safety and/or governance, and everyone else.

First, when it comes to everyone else, the real robustness failure may actually run in the opposite direction: most people are underreacting, not overreacting. In light of that, those updating in response to AI 2027 seem to be acting more robustly than the vast majority.

It’s my understanding that titotal often writes from the frame of disproving rationalists. To be clear, I think this is perfectly worthwhile. I have productively referenced his excellent piece on diamondoid bacteria in several conversations, for instance.

However, while he asserts that people ought to make plans robust to “extreme uncertainty” about how AI will go, his criticism seems disproportionately targeted at the minority of people who are, by his lights, updating too much, rather than the vast majority of people who are doing virtually nothing (and who could do something) in response to AI progress.

Here, I’d argue that “extreme uncertainty” should imply a non-negligible chance of AGI during our lifetimes (including white-collar displacement, which could be quite destabilizing). Anything less suggests to me excessive certainty that AI won’t be transformative. As Helen Toner notes, “long timelines” are now crazy short post-ChatGPT, and given current progress, arguing that human-level AI is more than a few decades out puts the burden of proof squarely on you. But even under milder assumptions (e.g. long timelines, superintelligence is impossible), it still seems implausible that the most robust plan consists of “do ~nothing.”

As I stated, I don’t precisely know what he’s referring to when he talks about people “basing life decisions” on AI 2027, but a quick search yielded the following examples:

Assuming, again, that extreme uncertainty implies a decent chance of AGI within one’s lifetime, all of these seem like reasonable hedges, even under longer timelines. None of them are irreversibly damaging if short timelines don’t materialize. Sure, I’d be worried if people were draining their life savings on apocalypse-benders or hinting at desperate resorts to violence, but that’s not what’s happening. A few years of reduced savings or delayed family planning seems like a fair hedge against the real possibility of transformative AI this century.

Even more extreme & uncommon moves—like a sharp CS undergrad dropping out to work on alignment ASAP—seem perfectly fine. Is it optimal in a long-timelines world? No. But the sharp dropout isn't dipping out to do coke for two years; she's going to SF to work at a nonprofit. She'll be fine.

Inaction is a Bet Too

Relatedly, I might be reading too much into his specific wording, but the apparent emphasis on avoiding actions (rather than inactions) seems unjustified. He warns against doing things that could majorly backfire, but inaction is just another form of action. It’s a bet too.

The alternative to “acting on a bad model” is not “doing nothing.” It’s acting on some other implicit model—often something like: “staying in your current job is usually the right move.” That might work fine if you're deciding whether to spontaneously quit your boring day-job, but it's less useful if you’re interested in thriving during (or influencing) a potentially unprecedented technological disruption.

Acting on short timelines and being wrong can be costly, sure—embarrassing, legitimacy-eroding, even seriously career-damaging. But failing to act when short timelines are actually right? That would mean neglecting to protect yourself from (or prevent) mass unemployment, CBRN risk, and other harms, leaving it in the hands of (shudders) civil society and the government's emergency response mechanisms.

This asymmetry in costs could favor acting on short timelines.

The Difficulty of “Robust” Planning in Governance

Second, with respect to AI governance in particular (my wheelhouse), it’s not clear that plans “robust to extreme uncertainty” about AI actually exist. Or if they do, they might not be particularly good plans in any world.

I do know governance people who’ve made near-term career decisions informed in part by short timelines, but it wasn’t AI 2027 solely guiding their decision-making (as far as I know). That differs from what titotal says he’s seen (“people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI 2027”), so note that this subsection might be unrelated to what he’s referring to.

I’ve argued before that AI governance (policy in general) is time-sensitive in a way that makes robust planning really hard. What’s politically feasible in the next 2–5 years is radically different from what’s possible over 2–5 decades. And on the individual career level, the optimal paths for short vs. long timelines seem to diverge. For example, if you think AGI is a few years away, you might want to enter state-level policy or ally with the current administration ASAP. But if you think AGI is 30 years out, you might work on building career capital & expertise in industry or at a federal institution, or focus on high-level research that pays off later.

It seems hard to cleanly hedge here, and trying to do so likely means being pretty suboptimal in both worlds.

Corner Solutions & Why Flawed Models Still Matter

Earlier, I mentioned the asymmetry between short and long timelines that could favor acting on the former. Nonetheless, there is a potentially strong case to be made that in governance, the cost of false positives (overreacting to short timelines) is higher than the cost of false negatives. But it’s not obviously true. It depends on the real-world payoffs.

Since neither short- nor long-timeline governance efforts are anywhere close to saturation, neglected as they are, optimal resource allocation depends strongly on the relative payoffs across different futures—and the relative likelihoods of those futures. If the legitimacy hit from being wrong is low, the payoff from early action is high, and/or short timelines are thought to be somewhat likely, then acting on short timelines could be a much better move.

Basically, it seems like we’re dealing with something close to a “corner solution”: a world where the best plan under short timelines is quite different from the best plan under long ones, and there’s no clean, risk-averse middle path. In such cases, even rough evidence about the relative payoffs between short- and long-timeline strategies, for instance, becomes highly decision-relevant. At times, that might be enough to justify significant updates, even if the models guiding those updates are imperfect.
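
To make that payoff logic concrete, here is a purely illustrative back-of-the-envelope comparison. The probability and payoff numbers are invented for the example, not estimates of the actual governance landscape:

```python
# Purely illustrative arithmetic: the probability and payoff numbers below are
# invented to show what a "corner solution" looks like, not to describe the
# actual governance landscape.

P_SHORT = 0.35  # assumed probability that short timelines turn out to be right

# strategy -> (payoff if timelines are short, payoff if timelines are long)
payoffs = {
    "act on short timelines": (10.0, -2.0),  # big win if right, legitimacy hit if wrong
    "act on long timelines": (-8.0, 4.0),    # underprepared if short, steady payoff if long
    "split effort 50/50": (1.0, 1.0),        # mediocre in both worlds
}

for strategy, (v_short, v_long) in payoffs.items():
    expected_value = P_SHORT * v_short + (1 - P_SHORT) * v_long
    print(f"{strategy:>24}: EV = {expected_value:+.2f}")
```

With these made-up numbers, acting on short timelines comes out ahead and the 50/50 split is not the safe middle it looks like; change the probability or the payoffs and the ranking flips, which is exactly why even rough evidence about them is decision-relevant.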

Conclusion: Forecasting’s Catch-22

Titotal’s specific criticisms of AI 2027 are sharp and well-warranted. But the forecast is also one of the only serious attempts to actually formalize a distribution over AI timelines and think about downstream implications. I know it’s been immensely useful for my own thinking about timelines and progress, even though I also disagree with its conclusions (and specifics).

Titotal writes that in physics, models are only taken seriously when they have “strong conceptual justifications” and/or “empirical validation with existing data.” But AI forecasting is never going to approach the rigor of physics. And if short timelines are plausible, we barely have time to try.

In fact, there’s a bit of a catch-22 here:

So you end up with this weird tradeoff: spend time improving the epistemics and delay impact, or push for action earlier but with lower certainty about actual impact. There’s no obvious answer here.

In this context, even flawed forecasts like AI 2027 can support reasonable, defensible action, particularly in governance. Under genuine uncertainty, “robust” planning may not exist, inaction is just another risky bet, and flawed models may be the best we’ve got.

In light of this, I’d be very interested in seeing titotal try to “read the tea leaves of the AI future”—I think it’d be well worth everyone’s time.



