TechCrunch News, October 31, 2024
Clashing approaches to combat AI’s ‘perpetual bulls**t machine’

At TechCrunch Disrupt 2024, a panel on combating disinformation sparked a lively debate. All three panelists criticized social media and generative AI, arguing that generative AI has driven the cost of producing and distributing disinformation to zero, creating a worrying, self-reinforcing loop. They also contended that self-regulation in the form of voluntary limits and transparency reports is insufficient, while cautioning against fear-driven overreach that would squander AI's benefits.

🎤 Imran Ahmed noted that generative AI has reduced the marginal cost of producing disinformation to zero, and that the cost of distributing it is also zero, creating a perfect loop in which content is continuously produced, distributed, and assessed for improvement, which he called deeply worrying.

💡 Brandie Nonnecke said that self-regulation in the form of voluntary limits and transparency reports is wholly insufficient. These reports create the illusion that platforms are doing due diligence, when in reality they may simply be papering over the mess of trying to handle all this content.

🤔 Pamela San Martin agreed in principle with the other two, but warned against taking excessive measures out of fear and losing the benefits of AI; while AI-generated content has increased, elections have not yet been completely flooded by it.

The AI stage at TechCrunch Disrupt 2024 got off to a fiery but constructive start on a panel about combating disinformation. But in a spirited exchange of views tempered by expressions of respect and agreement, all three panelists had harsh words for social media and generative AI.

None was harsher, though, than Imran Ahmed, CEO of the Center for Countering Digital Hate.

“We’ve always had BS in politics, and a lot of politicians use lying as an art, a tool of doing politics. What we have now is quantitatively different, and to such a scale that it’s like comparing the conventional arms race of BS in politics to the nuclear race,” he said.

“It’s the economics that have changed so radically: The marginal cost of the production of a piece of disinformation has been reduced to zero by generative AI, and the marginal costs of the distribution of disinformation [is also zero],” he continued. “So what you have, theoretically, is a perfect loop system in which generative AI is producing, it’s distributing, and then it’s assessing the performance — A/B testing and improving. You’ve got a perpetual bulls–t machine. That’s quite worrying!”

Brandie Nonnecke, director of UC Berkeley’s CITRIS Policy Lab, pointed out that self-regulation in the form of voluntary limits and transparency reports is totally insufficient.

“I don’t think that these transparency reports really do anything, in part because in these transparency reports, they’ll say, look at what a great job we’re doing: We removed tens of thousands of pieces of harmful content. Well, what didn’t you remove? What’s still floating around that you didn’t catch? It gives a false sense that they’re actually doing due diligence, when I think underneath that all is a big mess of them trying to figure out how to deal with all of this content,” she said.

Pamela San Martin, co-chair of the Facebook Oversight Board, agreed in principle but warned not to throw the baby out with the bath water. “I think that it would be completely untrue to say that any social media platform is doing everything they have to do — especially I would not say that about Meta,” she said.

“I agree with what you said, but we thought this year, with 80 elections around the world, would be the year of AI and elections, that all the elections throughout the world would be flooded with AI deepfakes, and that that would be what controlled the narrative,” she continued. “We have seen a rise in it, but we have not seen elections being completely flooded with AI-generated content. Why do I say that? Not because I disagree, it is very concerning, but I also think that we have to keep in mind that if we start taking measures out of fear, we will lose the good part of AI.”


