The Verge - Artificial Intelligence · October 24, 2024
Why AI companies are dropping the doomerism

The AI industry is moving fast, and the positions staked out by OpenAI and Anthropic are drawing attention. OpenAI says superintelligent AI is coming; Anthropic, which once emphasized safety above all, is now competing aggressively. This piece examines both shifts and the questions behind them.

🎯 OpenAI's Sam Altman wrote in September that superintelligent AI will arrive within a few thousand days and bring enormous improvements across many fields; Anthropic's Dario Amodei published a long post making a similar case.

💥 Anthropic was founded by OpenAI staff concerned about the company's commercial direction and approach to safety, and was once described as the center of AI doomerism; now, amid the competition set off by ChatGPT, it is an eager participant.

🤔 The article explores why Anthropic's stance has shifted, how the safety of AGI could even be measured, and related industry news and topics.

Illustration by Cath Virginia / The Verge | Photos by Getty Images

The latest AI manifesto, from Anthropic CEO Dario Amodei, says a lot about the industry’s current moment.

On today’s episode of Decoder, we’re going to try and figure out “digital god.” I figured we’ve been doing this long enough, let’s just get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all of our questions? The AI industry has decided the answer is yes.

In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of OpenAI competitor Anthropic, published a 14,000-word post laying out what exactly he thinks such a system will be capable of when it arrives, which he says could be as soon as 2026.

What’s fascinating is that the visions laid out in both posts are so similar — they both promise dramatic superintelligent AI that will bring massive improvements to work, to science and healthcare, and even to democracy and prosperity. Digital god, baby.

But while the visions are similar, the companies are, in many ways, openly opposed: Anthropic is the original OpenAI defection story. Dario and a cohort of fellow researchers left OpenAI in 2021 after becoming concerned with its increasingly commercial direction and approach to safety, and they created Anthropic to be a safer, slower AI company. And the emphasis was really on safety until recently; just last year, a major New York Times profile of the company called it the “white-hot center of A.I. doomerism.”

But the launch of ChatGPT, and the generative AI boom that followed, kicked off a colossal tech arms race, and now, Anthropic is as much in the game as anyone. It’s taken in billions in funding, mostly from Amazon, and built Claude, a chatbot and language model to rival OpenAI’s GPT-4. Now, Dario is writing long blog posts about spreading democracy with AI.

So what’s going on here? Why is the head of Anthropic suddenly talking so optimistically about AI, when he was previously known for being the safer, slower alternative to the progress-at-all-costs OpenAI? Is this just more AI hype to court investors? And if AGI is really around the corner, how are we even measuring what it means for it to be safe?

To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what’s going on in the industry, and whether we can trust these AI leaders to tell us what they really think.

If you’d like to read more about some of the news and topics we discussed in this episode, check out the links below:
