LessWrong
My Interview With Cade Metz on His Reporting About Lighthaven

 

This post records a conversation between New York Times reporter Cade Metz and ZMD about Metz's reporting on AI risk. ZMD criticizes Metz for using loaded language such as "near-religious" in his "rationalism as religion" coverage, arguing that it departs from journalistic objectivity. Metz counters that his job is to explain complex technology and ideas to lay readers, which requires simplifying and consolidating information. The two dig into the uncertainty of AI progress, the nature of the risk arguments ("leap of faith" versus "uncertain probabilistic argument"), and how to accurately convey expert disagreement in the field, with particular attention to Geoffrey Hinton's views on AI risk and to whether AI risk can fairly be framed as a "religious concern".

🧐 ZMD argues that in his "rationalism as religion" coverage, Metz's use of words like "near-religious" departs from journalistic objectivity; writing simply "their concerns about AI" would have avoided unnecessary editorializing.

🗣️ Metz counters that a reporter's job is to explain complex topics to lay readers unfamiliar with the tech industry, which means simplifying and consolidating information from many sources, sometimes rendering beliefs about where the technology is headed in more accessible language such as "faith" or "leap".

📊 ZMD points out that there is an essential difference between calling the AI-risk argument a "leap of faith" and calling it an "uncertain probabilistic argument", and cites METR's research (such as the trend of AI task-completion horizons doubling) as an empirical basis for the risk argument, rebutting the "faith" framing.

🤔 Regarding Geoffrey Hinton's concerns about AI risk, ZMD questions Metz's framing of AI risk as a "religious concern", arguing that Hinton's view stems not from irrational factors but from his professional judgment, which cuts against the "rationalists spreading AI risk" narrative Metz's reporting tries to build.

⚖️ ZMD holds that the AI-risk debate features rational, well-grounded arguments on both sides, and that a reporter should present them evenhandedly rather than label one side's position "faith", which he sees as inappropriate editorializing that fails to fairly reflect the debate within the field.

Published on August 17, 2025 2:30 AM GMT

On 12 August 2025, I sat down with New York Times reporter Cade Metz to discuss some criticisms of his 4 August 2025 article, "The Rise of Silicon Valley's Techno-Religion". The transcript below has been edited for clarity.


ZMD: In accordance with our meetings being on the record in both directions, I have some more questions for you.

I did not really have high expectations about the August 4th article on Lighthaven and the Secular Solstice. The article is actually a little bit worse than I expected, in that you seem to be pushing a "rationalism as religion" angle really hard in a way that seems inappropriately editorializing for a news article.

For example, you write, quote,

Whether they are right or wrong in their near-religious concerns about A.I., the tech industry is reckoning with their beliefs.

End quote. What is the word "near-religious" doing in that sentence? You could have just cut the word and just said, "their concerns about AI", and it would be a perfectly informative sentence.

CM: My job is to explain to people what is going on. These are laypeople. They don't necessarily have any experience with the tech industry or with your community or with what's going on here. You have to make a lot of decisions in order to do that, right? The job is to take information from lots and lots and lots of people who each bring something to the article, and then you consolidate that into a piece that tries to convey all that information. If you write an article about Google, Google is not necessarily going to agree with every word in the article.

ZMD: Right, I definitely understand that part. I'm definitely not demanding a puff piece about these people who I have also written critically about. But just in terms of like—

CM: But you and so many others in the community, who have been in the community for decades in some cases, years, use the same language. They use stronger language.

ZMD: Right, but so like—I'm just saying in terms of writing a news article, when you're trying to convey the who-what-when-where-why, "their concerns about AI", just with no adjective between "their" and "concerns", is much more straightforward.

CM: No, but people need to understand, okay, that the technology as it exists today, it is not clear how it would get to the point where it's going to, you know, destroy humanity, for instance. That is a belief that the average person doesn't understand. And when someone says that, they take it at face value, like ChatGPT is going to jump out of—

ZMD: That's not actually the argument, though.

CM: That's what I'm saying. People need to understand that the argument on some level, and people are going to debate this for years and years, or however long it takes, but that's a leap of faith, right?

ZMD: Yeah, that was actually my second question here. I was a little bit disappointed by the article, but the audio commentary was kind of worse. You open the audio commentary with:

We have arrived at a moment when many in Silicon Valley are saying that artificial intelligence will soon match the powers of the human brain, even though we have no hard evidence that will happen. It's an argument based on faith.

End quote. And just, these people have written hundreds of thousands of words carefully arguing why they think powerful AI is possible and plausibly coming soon.

CM: That's an argument.

ZMD: Right.

CM: It's an argument.

ZMD: Right.

CM: We don't know how to get there.

ZMD: Right.

CM: We do not—we don't know—

ZMD: But do you understand the difference between "uncertain probabilistic argument" and "leap of faith"? Like these are different things.

CM: I didn't say that. People need to understand that we don't know how to get there. There are trend lines that people see. There are arguments that people make. But we don't know how to get there. And people are saying it's going to happen in a year or two, when they don't know how to get there. There's a gap.

ZMD: Yes.

CM: And boiling this down in straightforward language for people, that's my job.

ZMD: Yeah, so I think we agree that we don't know how to get there. There are these arguments, and, you know, you might disagree with those arguments, and that's fine. You might quote relevant experts who disagree, and that's fine. You might think these people are being dishonest or self-deluding, and that's fine. But to call it "an argument based on faith" is different from those three things. What is your response to that?

CM: I've given my response.

ZMD: It doesn't seem like a very ...

CM: We're just saying the same thing.

ZMD: Yeah, but like I feel like there should be some way to break the deadlock of, we're just saying the same thing, like some—right?

CM: I feel like there should be a way to break lots of deadlocks, right?

ZMD: Because, for example, Model Evaluation and Threat Research, METR, which was spun out of Paul Christiano's Alignment Research Center, has been measuring which tasks AI models can successfully complete, in terms of how long it would take a human to complete the task, and they're finding that the task length is doubling every seven months. The idea is that when you have AI that can do intellectual tasks that would take humans a day, a week, a month, that could pose a threat in terms of its ability to autonomously self-replicate. Again, you might disagree with the methodology, but that seven-month doubling time, which is one of the things people are looking at when they're writing these wild-sounding scenarios, is empirical work on the existing technology.
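The doubling trend ZMD describes is a simple exponential extrapolation, which can be sketched in a few lines. This is an illustrative sketch only, not METR's methodology: the function name, the starting one-hour horizon, and the fixed seven-month doubling period are assumptions for the example.

```python
# Illustrative sketch (not METR's actual analysis): extrapolating the
# reported trend that the human-time length of tasks AI systems can
# complete doubles roughly every seven months.

DOUBLING_MONTHS = 7.0  # doubling period from the reported trend (assumed constant here)

def task_horizon(months_from_now: float, current_horizon_hours: float = 1.0) -> float:
    """Projected task horizon in human-hours after `months_from_now`,
    assuming the exponential trend continues unchanged."""
    return current_horizon_hours * 2 ** (months_from_now / DOUBLING_MONTHS)

# Under this assumption, a 1-hour horizon today becomes an 8-hour
# (one-workday) horizon after three doublings, i.e. 21 months.
print(task_horizon(21))  # → 8.0
```

The point of contention in the interview is not the arithmetic but whether the trend continues, which is exactly the uncertainty both speakers acknowledge below.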

CM: The same lab has also released a study saying that these LLMs actually slow coders down.

ZMD: Yeah, I saw that, too.

CM: Again, trend lines, okay, sometimes they slow down. Sometimes they stop. And these trend lines are also, they're logarithmic, they're not exponential.

ZMD: That's an important point, yeah.

CM: We agree on all this stuff.

ZMD: We agree on all this—

CM: It's about disagreeing about the best way to convey this to a person.

ZMD: I feel like if you said the thing you just said in the article, that would have been great. But the thing you said in the audio commentary was "an argument based on faith", which is not what you just said to me. Those are different things.

Also, speaking of this religion angle, in your previous book, Genius Makers, you told the story of Turing Award–winning deep learning pioneer Geoffrey Hinton. More recently, in 2023, Hinton left his position at Google specifically in order to speak up about AI risks. There was actually a nice article about this in The New York Times, "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". You might have seen that one.

CM: Yes, I did. I broke that story.

ZMD: That was the joke.

CM: I'm with you.

ZMD: But so Hinton has said that his independent impression of the existential threat is more than 50%. Doesn't that undermine this narrative you're trying to build of AI risk as being a religious concern spread by rationalists? Hinton clearly isn't getting this from having read Harry Potter and the Methods of Rationality.

CM: People get ...

ZMD: So are you proposing a model where like—

CM: Go back and read that article. Listen to the Daily episode I did about Geoff. I push back on so much of what he says. I say in the articles, many of his students push back. They don't agree with it. So this is what needs to be conveyed to people, that many people working on the same technology, who are very smart, and very well educated, and very experienced, completely disagree on what is happening.

ZMD: Right. That part, I definitely agree that that part is part of the story.

CM: But there is also this community that has driven a lot of what is happening here, and people need to understand that.

ZMD: Yeah, but the situation, in terms of people who think AI is a big risk and people who think AI is just another technology, seems symmetrical to me, in the sense that you have smart people on both sides who just disagree about what's happening. And if you're specifically pointing at one side and saying, this is an argument based on faith, then that's editorializing, in a way that's not what you should be doing as a reporter just trying to convey the debate to readers who don't know that this is happening.

CM: I'm pointing out a key part of the debate. I'm pointing that out.



