Mashable · 2 days ago, 17:44
Sam Altman's outrageous Singularity blog perfectly sums up AI in 2025

This article examines OpenAI CEO Sam Altman's vision and practice in the AI field. It begins by revisiting Altman's early blog, which reveals his pursuit of "mission" and "religion." It then analyzes Altman's optimistic predictions about future digital superintelligence and points out the exaggeration that may lurk in his claims. The article also looks at ChatGPT's energy and water consumption and at OpenAI's disclosures on the issue. It closes by calling on OpenAI to be more open about its data center energy use and by weighing Altman's vision against his actual actions.

🤖 Sam Altman's early blog covered a wide range of topics, reflecting his thinking on startups and the future; his view of "mission" and "religion" foreshadowed his later trajectory in AI.

🚀 Altman is optimistic about AI progress between 2026 and 2035, predicting breakthrough advances in that window, though his claims carry a fair amount of subjectivity and uncertainty.

💧 Altman has, for the first time, published figures for ChatGPT's energy and water consumption, but the disclosure is not transparent enough and leaves out the data needed on training the models.

💡 The article calls on OpenAI to release more data so that researchers can verify its energy figures and the AI industry can move toward more sustainable energy use.

Sam Altman has been a blogger far longer than he's been in the AI business. 

Now the CEO of OpenAI, Altman began his blog — titled simply, if concerningly, "Sam Altman" — in 2013. He was in his third year at the startup accelerator Y Combinator at the time, and would soon be promoted to president. The first page of posts contains no references to AI. Instead we get musings on B2B startup tools, basic dinner party conversation openers, and UFOs (Altman was a skeptic).

Then there was this sudden insight: "The most successful founders do not set out to create companies," Altman wrote. "They are on a mission to create something closer to a religion." Fast-forward to Altman's latest 2025 blog post, "The Gentle Singularity" — and, well, it's hard not to say mission accomplished.

"We are past the event horizon; the takeoff has started," is how Altman opens, and the tone only gets more messianic from there. "Humanity is close to building digital superintelligence." Can I get a hallelujah?

To be clear, the science does not suggest humanity is close to building digital superintelligence, a.k.a. Artificial General Intelligence. The evidence says we have built models that can be very useful in crunching giant amounts of information in some ways, wildly wrong in others. AI hallucinations appear to be baked into the models, increasingly so with AI chatbots, and they're doing damage in the real world.

There are no advances in reasoning, as was made plain in a paper also published this week: AI models sometimes don't see the answer when you tell them the answer.  

Don't tell that to Altman. He's off on a trip to the future to rival that of Ray Kurzweil, the offbeat Silicon Valley guru who popularized the idea that we're accelerating toward a technological singularity. Kurzweil set his all-change event many decades down the line. Altman is willing to risk looking wrong as soon as next year: "2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world … It’s hard to even imagine what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year."

The "likely", "may," and "maybe" there are doing a lot of lifting. Altman may have "something closer to religion" in his AGI assumptions, but cannot cast reason aside completely. Indeed, shorn of the excitable sci-fi language, he's not always claiming that much (don't we already have "robots that can do tasks in the real world"?). As for his most outlandish claims, Altman has learned to preface them with a word salad that could mean anything. Take this doozy: "In some big sense, ChatGPT is already more powerful than any human who has ever lived." Can I get a citation needed? 

Did Sam Altman just invite an AI environmental audit?

Altman's latest blog isn't all future-focused speculation. Buried within is the OpenAI CEO's first ever statement on ChatGPT's energy and water usage — and as with his needless drama over a Scarlett Johansson-like voice, opening that Pandora's box may not go the way Altman thinks.

Since ChatGPT exploded in popularity in 2023, OpenAI — along with its main AI rivals Google and Microsoft — has stonewalled researchers looking for details on their data center usage. "We don't even know how big models like GPT are," Sasha Luccioni, climate lead at the open-source AI platform Hugging Face, told me last year. "Nothing is divulged, everything is a company secret."

Altman finally divulged, kinda. In the middle of a blog post, in parentheses, with the preface "people are often curious about how much energy a ChatGPT query uses," the OpenAI CEO offers two stats: "the average query uses about 0.34 watt-hours ... and about 0.000085 gallons of water."
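
For a rough sense of scale, here is a minimal back-of-envelope sketch in Python that multiplies Altman's two per-query figures by a daily query volume. The one-billion-queries-a-day number is a hypothetical round figure chosen purely for illustration, not an OpenAI disclosure.

# Back-of-envelope scaling of Altman's per-query figures.
# The per-query numbers are the ones quoted in his blog post; the daily
# query volume is a HYPOTHETICAL round number used only to illustrate scale.

WH_PER_QUERY = 0.34              # watt-hours per query, per Altman
GALLONS_PER_QUERY = 0.000085     # gallons of water per query, per Altman

queries_per_day = 1_000_000_000  # hypothetical: one billion queries per day

daily_energy_mwh = WH_PER_QUERY * queries_per_day / 1_000_000   # Wh -> MWh
daily_water_gallons = GALLONS_PER_QUERY * queries_per_day

print(f"Energy: {daily_energy_mwh:,.0f} MWh per day")        # 340 MWh/day
print(f"Water:  {daily_water_gallons:,.0f} gallons per day")  # 85,000 gallons/day

Whatever volume you plug in, the exercise only covers inference; as the next paragraph notes, the training-side numbers are the ones still missing.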

There's no more data offered to confirm these stats; Altman doesn't even specify which ChatGPT model they describe. OpenAI hasn't responded to multiple follow-up requests from several news outlets. Altman has an obvious interest in downplaying the amount of energy and water OpenAI requires, and he's already doing it here with a little sleight-of-hand. It isn't the average query that concerns researchers like Luccioni; it's the enormous amount of energy and water required to train the models in the first place.

But now that he's shown himself responsive to the "often curious," Altman has less reason to stonewall. Why not release all the data so others can replicate his numbers — you know, like scientists do? Meanwhile, battles over data center energy and water usage are brewing across the US. Luccioni has started an AI Energy Leaderboard that shows how wildly open-source AI models vary in their energy use.

This is serious stuff, both because companies don't like to spend more on energy than they need to and because there's buy-in: Meta and (to a lesser extent) Microsoft and Google are already on the leaderboard. Can OpenAI afford not to be?

In the end, the answer depends on whether Altman is building a company or more of a religion.
