MIT Technology Review » Artificial Intelligence · 9 hours ago
Five things you need to know about AI right now

In his SXSW London talk "Five things you need to know about AI," the author shares his personal take on the five most important ideas in the field right now. He argues that generative AI is progressing at a startling pace, already showing strong capability across music, code, video, and more, and warns against underestimating it. He contends that AI "hallucination" is not an error but intrinsic to how generative models work, and urges readers to understand the technology's limits. He also highlights AI's large and growing energy demands, and the lack of transparency around the relevant data. And although we know how to build large language models, their inner workings remain a mystery, which limits our ability to judge their capabilities and behavior. Finally, he questions the concept of "artificial general intelligence" (AGI), arguing that its definition is vague and unsupported by evidence, and calls for both wonder at AI's progress and careful, critical thinking.

🌟 Generative AI is startlingly capable and improving fast: Using music as an example, the author notes that AI-generated content has become hard to distinguish from human work, and that this capability is rapidly extending to code, robotics, protein synthesis, and video. Its potential should not be underestimated.

💡 AI "hallucination" is a feature, not a bug: Hallucination follows directly from how generative models are trained; they are trained to make things up. The key point is that what they make up often matches reality surprisingly well, so hallucination should be understood as an intrinsic property of the technology, not a bug to be fixed.

⚡ AI is a major and growing energy consumer: Training large models takes enormous energy, but more importantly, with hundreds of millions of people now using AI services daily, overall consumption is rising sharply. Tech companies have been opaque about their energy data, though research is beginning to shed light on the problem.

❓ How large language models work remains an open question: We know how to build and optimize them, but their internal mechanisms are still unclear, as if the technology had arrived from outer space. This gap in understanding means we cannot fully predict their capabilities, control their behavior, or fully explain phenomena like hallucination.

🚫 "Artificial general intelligence" (AGI) is ill-defined and lacks empirical support: The author argues that the definition of AGI is unclear and circular, defining "intelligence" in terms of intelligence. Expectations of AGI rest more on faith in AI's continued progress than on actual evidence, and today's AI remains a machine that mimics human behavior, with serious flaws.

Last month I gave a talk at SXSW London called “Five things you need to know about AI”—my personal picks for the five most important ideas in AI right now. 

I aimed the talk at a general audience, and it serves as a quick tour of how I’m thinking about AI in 2025. I’m sharing it here in case you’re interested. I think the talk has something for everyone. There’s some fun stuff in there. I even make jokes!

The video is now available (thank you, SXSW London). Below is a quick look at my top five. Let me know if you would have picked different ones!

1. Generative AI is now so good it’s scary.

Maybe you think that’s obvious. But I am constantly having to check my assumptions about how fast this technology is progressing—and it’s my job to keep up. 

A few months ago, my colleague—and your regular Algorithm writer—James O’Donnell shared 10 music tracks with the MIT Technology Review editorial team and challenged us to pick which ones had been produced using generative AI and which had been made by people. Pretty much everybody did worse than chance.

What’s happening with music is happening across media, from code to robotics to protein synthesis to video. Just look at what people are doing with new video-generation tools like Google DeepMind’s Veo 3. And this technology is being put into everything.

My point here? Whether you think AI is the best thing to happen to us or the worst, do not underestimate it. It’s good, and it’s getting better.

2. Hallucination is a feature, not a bug.

Let’s not forget the fails. When AI makes up stuff, we call it hallucination. Think of customer service bots offering nonexistent refunds, lawyers submitting briefs filled with nonexistent cases, or RFK Jr.’s government department publishing a report that cites nonexistent academic papers. 

You’ll hear a lot of talk that makes hallucination sound like it’s a problem we need to fix. The more accurate way to think about hallucination is that this is exactly what generative AI does—what it’s meant to do—all the time. Generative models are trained to make things up.

What’s remarkable is not that they make up nonsense, but that the nonsense they make up so often matches reality. Why does this matter? First, we need to be aware of what this technology can and can’t do. But also: Don’t hold out for a future version that doesn’t hallucinate.
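The mechanism behind that claim can be sketched in a few lines. Below is a toy next-token sampler; the probability table is invented purely for illustration (real models compute such distributions from context), but the key point is the same: output is sampled, not looked up.

```python
import random

# Toy illustration of generative decoding. The "model" here is just an
# invented probability table over possible next tokens.
next_token_probs = {
    "Paris": 0.60,     # often what the model makes up matches reality...
    "Lyon": 0.25,
    "Atlantis": 0.15,  # ...and sometimes it does not: a "hallucination"
}

def sample_next_token(probs):
    """Draw one token at random, weighted by its assigned probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Every completion is produced the same way; some are just made up correctly.
completion = sample_next_token(next_token_probs)
```

Note that the correct answer and the hallucination come out of the exact same sampling step, which is why there is no separate "hallucination mode" to switch off.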

3. AI is power hungry and getting hungrier.

You’ve probably heard that AI is power hungry. But a lot of that reputation comes from the amount of electricity it takes to train these giant models, though giant models only get trained every so often.

What’s changed is that these models are now being used by hundreds of millions of people every day. And while using a model takes far less energy than training one, the energy costs ramp up massively with those kinds of user numbers. 

ChatGPT, for example, has 400 million weekly users. That makes it the fifth-most-visited website in the world, just after Instagram and ahead of X. Other chatbots are catching up. 
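To see how quickly small per-query costs compound at that scale, here is a back-of-envelope sketch. The per-query energy and usage figures are placeholder assumptions, not measurements; the only number taken from the text is the 400 million users, which is a weekly count treated here as daily for simplicity.

```python
# Back-of-envelope inference-energy estimate. Every value marked "assumed"
# is a placeholder for illustration, not a measured figure.
energy_per_query_wh = 0.3        # assumed watt-hours per chatbot query
queries_per_user_per_day = 10    # assumed daily queries per active user
users = 400_000_000              # weekly-user figure cited above, treated as daily

daily_energy_wh = energy_per_query_wh * queries_per_user_per_day * users
daily_energy_mwh = daily_energy_wh / 1_000_000  # roughly 1,200 MWh per day
```

Even with deliberately modest assumptions, the total lands in the range of a sizable power plant's daily output, which is the point: inference cost per query is tiny, but the multiplier is enormous.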

So it’s no surprise that tech companies are racing to build new data centers in the desert and revamp power grids.

The truth is we’ve been in the dark about exactly how much energy it takes to fuel this boom because none of the major companies building this technology have shared much information about it. 

That’s starting to change, however. Several of my colleagues spent months working with researchers to crunch the numbers for some open source versions of this tech. (Do check out what they found.)

4. Nobody knows exactly how large language models work.

Sure, we know how to build them. We know how to make them work really well—see no. 1 on this list.

But how they do what they do is still an unsolved mystery. It’s like these things have arrived from outer space and scientists are poking and prodding them from the outside to figure out what they really are.

It’s incredible to think that never before has a mass-market technology used by billions of people been so little understood.

Why does that matter? Well, until we understand them better we won’t know exactly what they can and can’t do. We won’t know how to control their behavior. We won’t fully understand hallucinations.

5. AGI doesn’t mean anything.

Not long ago, talk of AGI was fringe, and mainstream researchers were embarrassed to bring it up. But as AI has gotten better and become far more lucrative, serious people are happy to insist they're about to create it. Whatever it is.

AGI—or artificial general intelligence—has come to mean something like: AI that can match the performance of humans on a wide range of cognitive tasks.

But what does that mean? How do we measure performance? Which humans? How wide a range of tasks? And performance on cognitive tasks is just another way of saying intelligence—so the definition is circular anyway.

Essentially, when people refer to AGI they now tend to just mean AI, but better than what we have today.

There’s this absolute faith in the progress of AI. It’s gotten better in the past, so it will continue to get better. But there is zero evidence that this will actually play out. 

So where does that leave us? We are building machines that are getting very good at mimicking some of the things people do, but the technology still has serious flaws. And we’re only just figuring out how it actually works.

Here’s how I think about AI: We have built machines with humanlike behavior, but we haven’t shrugged off the habit of imagining a humanlike mind behind them. This leads to exaggerated assumptions about what AI can do and plays into the wider culture wars between techno-optimists and techno-skeptics.

It’s right to be amazed by this technology. It’s also right to be skeptical of many of the things said about it. It’s still very early days, and it’s all up for grabs.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
