TechCrunch News · 6 hours ago
Why a new anti-revenge porn law has free speech experts alarmed

The newly enacted US Take It Down Act targets revenge porn and AI-generated deepfakes, requiring platforms to respond to victims' takedown requests within 48 hours or face liability. Although the law has been widely praised, experts warn that its vague language, lax verification standards, and tight compliance window could lead to overenforcement, censorship of legitimate content, and even surveillance. The Electronic Frontier Foundation notes that content moderation at scale is problematic and risks censoring important speech. The law could also be abused to target LGBTQ+ content. Platforms may over-censor, lean on AI monitoring, and potentially extend scanning into encrypted messages, raising privacy concerns.

⚖️ The Take It Down Act targets nonconsensual explicit images, both real and AI-generated, requiring platforms to respond to takedown requests within 48 hours or face liability. The law is meant to protect victims, but it has also raised concerns about censorship and free speech.

⏱️ The law gives platforms one year to establish a process for removing nonconsensual intimate imagery. However, a takedown request requires only a signature from the victim or their representative, with no photo ID or other verification, which could enable abuse such as malicious takedowns of LGBTQ+ content.

🤖 To avoid liability, platforms may default to taking content down without investigating whether it is actually nonconsensual intimate imagery or other protected speech. Platforms may also monitor content more aggressively with AI, potentially extending into encrypted messages, to reduce how much problematic content they must take down later.

🛡️ Companies such as Hive have developed AI tools to detect deepfakes and child sexual abuse material, and partner with platforms including Reddit and Giphy. These tools can monitor content at upload time, but they also raise privacy and censorship concerns.

📢 The Electronic Frontier Foundation warns that, at a moment when some politicians openly oppose certain types of content (such as critical race theory, abortion information, or climate change information), it is unsettling to see both parties openly endorse content moderation at this scale.

Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. 

The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance. 

“Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch.

Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.

“I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said. 

Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.” 

Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney.


Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim. 

Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim. 

Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity.

“This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.

McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future. 

Platforms are already using AI to monitor for harmful content.

Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal. 

“We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.” 

Hive’s model is a software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor before anything is sent out to the community. 
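The upload-time pattern Guo describes can be sketched as a gate that runs a moderation classifier before anything is published. This is a hypothetical illustration, not Hive's actual API: the `classify` function stands in for a call to a third-party moderation service, and the names `ModerationResult` and `handle_upload` are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    is_flagged: bool
    label: str  # e.g. "deepfake", "csam", "clean"

def classify(image_bytes: bytes) -> ModerationResult:
    # Placeholder for a call to a third-party moderation API.
    # A real integration would send the image over HTTPS and
    # parse the service's response into a result object.
    return ModerationResult(is_flagged=False, label="clean")

def handle_upload(image_bytes: bytes) -> str:
    """Gate content at the point of upload, before it reaches the community."""
    result = classify(image_bytes)
    if result.is_flagged:
        # Flagged content never gets published; it may be queued
        # for human review instead.
        return f"rejected: {result.label}"
    return "published"
```

The key design point is that the check runs synchronously in the upload path, so flagged material is blocked before dissemination rather than taken down after a complaint.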

A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim. 
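The StopNCII approach of matching live traffic against a database of known images can be illustrated with a simple hash lookup. Note this is only a sketch: StopNCII in practice uses perceptual hashes so that near-duplicates match, whereas the plain SHA-256 used below only catches byte-identical files, and the hash database here is hypothetical.

```python
import hashlib

def image_hash(data: bytes) -> str:
    # Cryptographic hash for illustration; a production system would
    # use a perceptual hash so resized or re-encoded copies still match.
    return hashlib.sha256(data).hexdigest()

def matches_known_ncii(data: bytes, known_hashes: set[str]) -> bool:
    """Check an uploaded image against a database of known NCII hashes."""
    return image_hash(data) in known_hashes

# Hypothetical database of hashes submitted by victims.
known_hashes = {image_hash(b"previously reported image")}
```

Because only hashes are stored and compared, the matching service never needs to hold copies of the images themselves, which is part of why this design is considered privacy-preserving.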

McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage. 

Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging.

On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 

“And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.” 

While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.

On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status. 

“At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicit about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said.
