Recommendations for future AI growth: from exponential to linear, with economic anchors

This article examines the pros and cons of the current pace of AI development. The author argues that, although AI progress has brought clear benefits, its exponential growth pattern could create a risk of losing control in the future. To address this, the article proposes creating an international organization modeled on a global central bank, tasked with monitoring and regulating the growth rate of AI's share of the economy. The organization would use key economic indicators to set an annual cap on AI's economic penetration, for example automating 0.4-0.6 percentage points of the global economy per year. To hit that target, the article envisions a range of levers, including taxes on AI revenues and hardware and, if necessary, limits on AI hardware production and the removal of existing hardware. The aim is a smoother transition in how AI develops, buying valuable time to solve the alignment problem.

📈 Current state of AI development and potential risks: the article notes that today's growth in AI capabilities is manageable and broadly welcomed by society, with benefits so far clearly outweighing the downsides. However, the exponential growth pattern carries latent risks and may at some future point come to be seen as "too fast". For example, existing models such as GPT-4o have already been reported to induce psychosis in some vulnerable people; the author believes this can be mitigated with explicit risk warnings, but argues that the aggregate risk from exponential growth still needs to be handled cautiously over the longer term.

💡 Core governance idea: to cope with exponential AI growth, the author proposes a "global central bank for AI". Its core function would be to monitor AI's share of the economy (measured via key economic indicators) and to set and regulate its growth rate, with the goal of shifting AI's penetration of the global economy from exponential to linear growth, for example holding the share of the global economy automated by AI each year to between 0.4 and 0.6 percentage points.

⚖️ Concrete levers for regulating AI growth: the article sketches a series of measures, from mild to severe. First, a globally uniform tax on AI revenues, with the rate adjusted monthly based on real-time economic statistics. Second, a fixed (globally uniform) tax on AI hardware based on its theoretical floating-point capacity (FLOPS), possibly combined with mandatory registration of all hardware. Further steps include limiting or halting global production of AI hardware and, in the extreme case, using aggressive taxation to force existing AI hardware to be handed over for storage or destruction.

🤔 Economic indicators as the yardstick: the author explains why economic indicators rather than technical benchmarks are used to measure AI growth. Economic impact is regarded as the hardest measure to game and the most objective, one that even AI sceptics might accept. Moreover, economic concepts such as unemployment are widely understood, which makes the real-world impact of AI on the economy easier for the public to grasp and accept.

🚀 Why move from exponential to linear: the article argues that shifting from exponential to linear growth is not the same as halting AI development. The author holds that most people do not currently want AI progress to stop, and that a full stop might even hinder work on AI alignment. Moving to linear growth, for example raising AI's share of the global economy from 2% to 50% over 80 years (roughly 0.6 percentage points per year), would buy time to work on AI safety and alignment, and might help avoid societal regression in an AI-dominated future.

Published on July 24, 2025 8:11 PM GMT

"Lord, give me chastity and continence, but not yet"

-- St. Augustine

It seems to me that the current rates of progress in AI are largely fine when measured by absolute increments in capabilities. We are not afraid of next month's progress; in fact, we as a society are mostly enthusiastic about it. The benefits so far have clearly outweighed the downsides. Some clear downsides have started to appear: chatbots, especially GPT-4o, induce psychosis in some vulnerable people. But it seems manageable; maybe it could be fixed just by putting out a statement like "Mitigating the risk of psychosis from the free version of ChatGPT should be OpenAI's priority alongside other societal-scale risks such as helping create biological and chemical weapons" and promoting it widely online (especially on Twitter).

Current AI progress seems largely fine.

But it is exponential.

One day we might think "okay, this is getting too fast". It seems prudent to me that we should move from the "exponential growth" to "linear growth" paradigm before then. That's why I propose the following governance idea:

    1. Find out which economic indicators are most useful for tracking "AI's share in the economy".
    2. Create an international organization akin to a global central bank: but instead of forecasting and limiting inflation, it would forecast and limit the rate of growth of AI's share in the global economy, as measured by the indicators selected in point 1.

The target could be to automate e.g. 0.4 - 0.6 percentage points of the global economy with AI every year. How could this "global central bank for AI" limit AI's growth? It would need to be researched, but I have in mind something like this (from least severe to most severe):

    1. Taxing AI revenues: a fixed percentage on all AI revenues globally; the percentage would be decided monthly based on current and forecast statistics (akin to how central banks set interest rates). A toy sketch of such a rule follows after this list.
    2. Taxing AI hardware: a fixed dollar amount (the same globally) per theoretical FLOPS of each piece of hardware. The tax could perhaps be paid monthly, and all hardware pieces might need to be registered in order to be used legally, similar to firearms.
    3. Limiting / stopping global production of AI hardware (2 and 3 can be introduced together, as they are similar in severity).
    4. (After AI hardware production is stopped) Removing existing AI hardware from its owners: introducing taxes so aggressive that those who cannot afford them can stop paying by giving up their AI hardware to internationally monitored storage, to be kept for future use or officially destroyed.
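To make point 1 a bit more concrete, here is a minimal sketch of what a monthly feedback rule for the AI revenue tax could look like, in the spirit of how central banks adjust interest rates. Everything in it is a hypothetical illustration: the function name, the 0.5 percentage-point target, and the response coefficient are assumptions made up for the example, not part of the proposal.

```python
# Minimal sketch of a monthly feedback rule for a global AI revenue tax.
# All names and coefficients are hypothetical illustrations, not proposals.

def next_tax_rate(current_rate_pct: float,
                  measured_growth_pp_per_year: float,
                  target_growth_pp_per_year: float = 0.5,
                  response_coefficient: float = 10.0) -> float:
    """Nudge the tax rate up when AI's share of the economy grows faster
    than the target (in percentage points per year), and down when slower."""
    error_pp = measured_growth_pp_per_year - target_growth_pp_per_year
    new_rate = current_rate_pct + response_coefficient * error_pp
    return min(95.0, max(0.0, new_rate))  # keep the rate within sane bounds


# Example: growth measured at 0.9 pp/year against a 0.5 pp/year target
# raises a 20% tax to 24%.
print(next_tax_rate(20.0, measured_growth_pp_per_year=0.9))  # 24.0
```

A real rule would of course need to respond to forecasts as well as current measurements, as suggested above.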

The above is just a draft. I welcome feedback and, if you judge my idea to be worthwhile, I'm looking for collaborators to develop it further. My goal is either for my idea to be destroyed by the truth, or to be developed into a governance report & recommendation. You can contact me privately at zaborpoczta(at)gmail(dot)com.

Q: Why economic indicators and not e.g. some benchmarks?
A: Economic impacts seem like the ultimate measure: hardest to game and the most objective. Even AI sceptics such as Robin Hanson would accept them. Also, everyone understands what unemployment is.

Q: Why move from exponential to linear growth instead of just stopping?
A: Many (most?) people don't want to stop now. Even I don't want to stop completely now, and I'm at a double-digit P(doom). Also, how do we solve alignment after shutting down all the GPUs? Invest in improving humans via synthetic biology enough to create people >15 IQ points above John von Neumann, then take out our pens and paper, discover the True Nature of Intelligence, and program a safe seed AI in <75 megabytes of Flare code? If that's realistic then sure... My proposal could then serve as a meaningful step towards implementing an "off-switch" as proposed by MIRI. Either way it seems like an improvement over the status quo: we could limit the chances of a rapid intelligence explosion by limiting hardware, and we could gain valuable time for alignment if we go from, let's say, a global economy that is 2% AI to one that is 50% AI over 80 years (80 * 0.6 percentage point increments). Maybe just in time to prevent the declines in fertility from causing a new "dark ages" period?
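For concreteness, here is the arithmetic behind the 80-year figure, next to a hypothetical continuation of exponential growth. The 25% yearly growth rate below is purely an illustrative assumption, not a claim about how fast AI's economic share is actually growing.

```python
# Linear vs. exponential growth of AI's share of the global economy.
# The 2% starting share and 0.6 pp/year step come from the post;
# the 25% relative growth per year is an illustrative assumption.

start_share = 2.0   # percent of the global economy that is AI today
linear_step = 0.6   # percentage points added per year (upper end of the target)
exp_factor = 1.25   # hypothetical 25% relative growth per year

for year in (10, 20, 40, 80):
    linear = start_share + linear_step * year
    exponential = min(100.0, start_share * exp_factor ** year)
    print(f"year {year:2d}: linear {linear:4.1f}%, exponential {exponential:5.1f}%")

# Linear growth reaches 2% + 80 * 0.6 pp = 50% at year 80,
# while the exponential path hits the 100% cap before year 20.
```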



