Fortune | August 1, 18:28
Anthropic CEO Dario Amodei escalates war of words with Jensen Huang, calling out ‘outrageous lie’ and getting emotional about father’s death

 

In an interview, Anthropic CEO Dario Amodei elaborated on the risks and opportunities of AI technology and on his dispute with Nvidia CEO Jensen Huang. Amodei denied being a "doomer," stressing that his stance stems from his father's death, which gave him a deep appreciation of the urgency of solving these problems and of AI's enormous potential. He believes he embodies AI's benefits better than many self-described optimists, and he advocates guiding AI development through strong regulation and responsible scaling policies rather than falling into a "race to the bottom" of shipping products quickly without regard for risk. Amodei also criticized the "open source" model and emphasized that the integrity of company leaders, and their sincere pursuit of human welfare, are key to AI's safe development.

🎯 Dario Amodei is angered by the "doomer" label and calls the claim that he is trying to control the AI industry nonsense. He stresses that he is optimistic about advancing AI capabilities and sees AI as key to solving complex problems "beyond human scale," such as those in biology.

💔 Amodei's stance is rooted in the personal experience of losing his father to illness, which gave him a deep understanding of "the urgency of solving the relevant problems" and of AI's profound benefits to humanity. He sees AI as a necessary tool for confronting global challenges such as climate change and disease, and is hopeful about the better future AI can bring.

⚖️ Amodei advocates guiding AI development through strong regulation and "responsible scaling policies," and calls for a "race to the top," in which companies compete to build safer systems, rather than a "race to the bottom" of releasing products quickly without regard for risk. Anthropic was the first company to publish such a policy, aiming to set an example.

🚫 Amodei calls the "open source" model championed by Nvidia and Jensen Huang a "red herring," arguing that today's large language models are fundamentally opaque, so genuinely open-source development is impossible. He believes open-source approaches built on opaque models can conceal risks and may hinder progress on AI safety.

🤝 Amodei founded Anthropic out of concern that rival companies lacked sincerity and trustworthiness on safety. He stresses that company leaders must be trustworthy people with sincere motivations for AI safety work to succeed; otherwise it only contributes to bad outcomes.

The doomers versus the optimists. The techno-optimists and the accelerationists. The Nvidia camp and the Anthropic camp. And then, of course, there’s OpenAI, which opened the Pandora’s Box of artificial intelligence in the first place.

The AI space is driven by debates about whether it’s a doomsday technology or the gateway to a world of future abundance, or even whether it’s a throwback to the dotcom bubble of the early 2000s. Anthropic CEO Dario Amodei has been outspoken about AI’s risks, even famously predicting it would wipe out half of all white-collar jobs, a much gloomier outlook than the optimism offered by OpenAI’s Sam Altman or Nvidia’s Jensen Huang in the past. But Amodei has rarely laid it all out in the way he just did on tech journalist Alex Kantrowitz’s Big Technology podcast on July 30.

In a candid and emotionally charged interview, Amodei escalated his war of words with Nvidia CEO Jensen Huang, vehemently denying accusations that he is seeking to control the AI industry and expressing profound anger at being labeled a “doomer.” Amodei’s impassioned defense was rooted in a deeply personal revelation about his father’s death, which he says fuels his urgent pursuit of beneficial AI while simultaneously driving his warnings about its risks, including his belief in strong regulation.

Amodei directly confronted the criticism, stating, “I get very angry when people call me a doomer … When someone’s like, ‘This guy’s a doomer. He wants to slow things down.'” He dismissed the notion, attributed to figures like Jensen Huang, that “Dario thinks he’s the only one who can build this safely and therefore wants to control the entire industry” as an “outrageous lie. That’s the most outrageous lie I’ve ever heard.” He insisted that he’s never said anything like that.

His strong reaction, Amodei explained, stems from a profound personal experience: his father’s death in 2006 from an illness that saw its cure rate jump from 50% to roughly 95% just three or four years later. This tragic event instilled in him a deep understanding of “the urgency of solving the relevant problems” and a powerful “humanistic sense of the benefit of this technology.” He views AI as the only means to tackle complex issues like those in biology, which he felt were “beyond human scale.” As he continued, he explained how he’s actually the one who’s really optimistic about AI, despite his own doomsday warnings about its future impact.

Who’s the real optimist?

Amodei insisted that he appreciates AI’s benefits more than those who call themselves optimists. “I feel in fact that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists,” he asserted.

In bringing up “optimist” and “accelerationist,” Amodei was referring to two camps, even movements, in Silicon Valley, with venture-capital billionaire Marc Andreessen close to the center of each. The Andreessen Horowitz co-founder has embraced both, issuing a “techno-optimist manifesto” in 2023 and often tweeting “e/acc,” short for effective accelerationism.

Both terms stretch back to roughly the mid-20th century, with techno-optimism appearing shortly after World War II and accelerationism appearing in Roger Zelazny's classic 1967 science-fiction novel “Lord of Light.” As Andreessen helped popularize and mainstream these beliefs, they roughly add up to an overarching conviction that technology can solve all of humanity’s problems. Amodei’s remarks to Kantrowitz revealed much in common with these beliefs, with Amodei declaring that he feels obligated to warn about the risks inherent in AI, “because we can have such a good world if we get everything right.”

Amodei claimed he’s “one of the most bullish about AI capabilities improving very fast,” saying he’s repeatedly stressed how AI progress is exponential in nature, with models rapidly improving with more compute, data, and training. This rapid advancement means issues such as national security and economic impacts are drawing very close, in his opinion. His urgency has increased because he is “concerned that the risks of AI are getting closer and closer,” and because he doesn’t see the ability to handle risk keeping up with the speed of technological advance.

To mitigate these risks, Amodei champions regulations and “responsible scaling policies” and advocates for a “race to the top,” where companies compete to build safer systems, rather than a “race to the bottom,” with people and companies competing to release products as quickly as possible, without minding the risks. Anthropic was the first to publish such a responsible scaling policy, he noted, aiming to set an example and encourage others to follow suit. He openly shares Anthropic’s safety research, including interpretability work and constitutional AI, seeing them as a public good.

Amodei addressed the debate about “open source,” as championed by Nvidia and Jensen Huang. It’s a “red herring,” Amodei insisted, because large language models are fundamentally opaque, so there can be no such thing as open-source development of AI technology as currently constructed.

An Nvidia spokesperson, who provided a similar statement to Kantrowitz, told Fortune that the company supports “safe, responsible, and transparent AI.” Nvidia said thousands of startups and developers in its ecosystem and the open-source community are enhancing safety. The company also criticized Amodei’s calls for increased AI regulation: “Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic. That’s not a ‘race to the top’ or the way for America to win.”

Anthropic reiterated its statement that it “stands by its recently filed public submission in support of strong and balanced export controls that help secure America’s lead in infrastructure development and ensure that the values of freedom and democracy shape the future of AI.” The company previously told Fortune in a statement that “Dario has never claimed that ‘only Anthropic’ can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models’ capabilities and risks and can prepare accordingly.”

Kantrowitz also brought up Amodei’s departure from OpenAI to found Anthropic, years before the drama that saw Sam Altman fired by his board over ethical concerns, with several chaotic days unfolding before Altman’s return.

Amodei did not mention Altman directly, but said his decision to co-found Anthropic was spurred by a perceived lack of sincerity and trustworthiness at rival companies regarding their stated missions. He stressed that for safety efforts to succeed, “the leaders of the company … have to be trustworthy people, they have to be people whose motivations are sincere.” He continued, “if you’re working for someone whose motivations are not sincere, who’s not an honest person, who does not truly want to make the world better, it’s not going to work. You’re just contributing to something bad.”

Amodei also expressed frustration with both extremes in the AI debate. He labeled arguments from certain “doomers” that AI cannot be built safely as “nonsense,” calling such positions “intellectually and morally unserious.” He called for more thoughtfulness, honesty, and “more people willing to go against their interest.”

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing. 
