AISN #58: Senate Removes State AI Regulation Moratorium

 


Published on July 3, 2025 5:26 PM GMT

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The Senate removes a provision from Republicans’ “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.

Senate Removes State AI Regulation Moratorium

The Senate removed a provision from Republicans’ “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI—however, it faced procedural and political challenges in the Senate, and was ultimately removed in a vote of 99-1. Here’s what happened.

A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the original moratorium to be a prerequisite for states to receive federal broadband expansion funds rather than a blanket restriction. On Wednesday, Senate Parliamentarian Elizabeth MacDonough judged that the moratorium would only clear the Byrd Rule if it was tied to only the new $500 million in federal broadband expansion funds provided by the reconciliation bill—not all $42.45 billion previously appropriated.

This significantly weakened the moratorium—even if it had been passed, states might have decided that regulating AI was worth foregoing new broadband expansion funds.

The moratorium moved to a vote in the Senate. On Saturday, the Senate voted 51-49 to move to general debate on the reconciliation bill, beginning the process of a “vote-a-rama” in which many amendments were debated and voted on in rapid succession. Senators Josh Hawley and Maria Cantwell were expected to bring an amendment to remove the moratorium from the bill.

Sen. Ted Cruz and Sen. Marsha Blackburn—another critic of the original moratorium—were set to pitch a compromise draft that shortened the moratorium from ten years to five and exempted state legislation establishing internet protections. However, on Tuesday, Blackburn abandoned that compromise after Steve Bannon and others reportedly reached out to her.

Instead, she brought an amendment with Sen. Cantwell to remove the moratorium entirely. With the moratorium lacking enough support, even Cruz voted for the amendment, which passed 99-1.

Sen. Blackburn cosponsored the Kids Online Safety Act last year. (Source.)

Even if the moratorium had survived the Senate, it could have faced an uphill battle in the House—Representatives Marjorie Taylor Greene and Thomas Massie came out against it, along with other prominent Republicans like Arkansas Governor Sarah Huckabee Sanders and Steve Bannon.

Judges Split on Whether Training AI on Copyrighted Material is Fair Use

Last week, two U.S. district judges decided cases involving Anthropic and Meta on the question of whether training LLMs on copyrighted works qualifies as fair use. While both judges sided with the AI companies, they sharply disagreed about how the Copyright Act should apply to similar cases—leaving legal precedent on the question ambiguous.

One judge ruled that training Anthropic’s Claude on copyrighted books is fair use. U.S. District Judge William Alsup granted a summary judgment that Anthropic using copyrighted books to train LLMs qualifies as fair use. The order held that three out of four of the factors considered when determining whether a given use of a copyrighted work is a fair use favored Anthropic’s use in training LLMs.

1. The purpose and character of the use. The court held that using copyrighted books to train LLMs is highly transformative, favoring fair use.

2. The nature of the copyrighted work. The books in question were expressive, pointing against fair use.

3. The amount and substantiality of the portion used. The court held that it was reasonably necessary to use the entirety of books in training LLMs, favoring fair use.

4. The effect of the use upon the potential market for or value of the copyrighted work. No exact copies or knockoffs resulted from the use of copyrighted books to train Claude, since Anthropic implemented guardrails to prevent Claude from exactly replicating the works on which it was trained. While the use may result in an “explosion” of AI-generated writing that competes with the copyrighted books, the court held that such a market effect doesn’t count under the Copyright Act.

Digitizing print books Anthropic lawfully bought is also protected—but piracy is not. Judge Alsup drew a sharp line between scanning paperbacks Anthropic had purchased and the millions of volumes it admitted downloading from pirate libraries. Turning a lawfully owned print copy into a PDF is fair use, but pirating books is not. That issue will proceed to trial.

In a case against Meta, another judge reached the opposite conclusion. While U.S. District Judge Vince Chhabria sided with Meta in its case, his order made clear he only did so because he believed the plaintiffs made the wrong arguments and presented the wrong evidence.

His analysis of whether using copyrighted books to train LLMs is fair use agrees with Judge Alsup’s on the first three factors—but sharply disagrees on the relevance of market effects. The upshot, he writes, is that “in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission.” He sided with Meta only because the plaintiffs failed to provide arguments or evidence showing that Meta’s LLMs resulted in market harm to their books.

The judges disagree on whether “indirect displacement” is a relevant market effect under the Copyright Act. Both orders assume that LLMs may now or soon be able to generate many competitors to human-written books, which could harm the market for human-written books.

Judge Alsup writes that the authors’ complaint about such an effect is “no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,” which is “not the kind of competitive or creative displacement that concerns the Copyright Act.”

However, Judge Chhabria responds that “using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.” That is, he argues that a similarity in kind does not outweigh a vast difference in magnitude.

Higher courts will likely settle the dispute. While Judge Alsup’s order might have provided precedent for similar cases, Chhabria’s disagreement leaves that precedent ambiguous. Both decisions fall under the jurisdiction of the Ninth Circuit, which has yet to rule on AI fair use. The authors in Anthropic’s case, at least, indicated that they will appeal the decision to the Ninth Circuit—and, ultimately, the issue may be up to the Supreme Court to decide.

In Other News

Michael C. Horowitz and Lauren A. Kahn discuss putting AI in a nuclear framework; Laura González Salmerón examines the pressure generative AI is putting on copyright law; Pete Buttigieg argues that society is underprepared for AI; and researchers at UC Berkeley released CyberGym, a new cybersecurity benchmark.

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.

