Published on July 3, 2025 5:26 PM GMT
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition: The Senate removes a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe to receive future versions.
Senate Removes State AI Regulation Moratorium
The Senate removed a provision from Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI—however, it faced procedural and political challenges in the Senate, and was ultimately removed in a vote of 99-1. Here’s what happened.
A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the original moratorium to be a prerequisite for states to receive federal broadband expansion funds rather than a blanket restriction. On Wednesday, Senate Parliamentarian Elizabeth MacDonough judged that the moratorium would only clear the Byrd Rule if it was tied to only the new $500 million in federal broadband expansion funds provided by the reconciliation bill—not all $42.45 billion previously appropriated.
This significantly weakened the moratorium—even if it had passed, states might have decided that regulating AI was worth forgoing the new broadband expansion funds.
The moratorium moved to a vote in the Senate. On Saturday, the Senate voted 51-49 to move to general debate on the reconciliation bill, beginning the process of a “vote-a-rama” which saw many amendments debated and voted on in rapid succession. Senators Josh Hawley and Maria Cantwell were expected to bring an amendment to remove the moratorium from the bill.
Sen. Ted Cruz and Sen. Marsha Blackburn—another critic of the original moratorium—were set to pitch a compromise draft that shortened the moratorium from ten years to five and exempted state legislation establishing internet protections. However, on Tuesday, Blackburn abandoned that compromise after Steve Bannon and others reportedly reached out to her.
Instead, she brought an amendment with Sen. Cantwell to remove the moratorium entirely. With support for the moratorium evaporating, even Cruz voted for the amendment, which passed 99-1.
Even if the moratorium had survived the Senate, it could have faced an uphill battle in the House—Representatives Marjorie Taylor Greene and Thomas Massie came out against it, along with other prominent Republicans like Arkansas Governor Sarah Huckabee Sanders and Steve Bannon.
Judges Split on Whether Training AI on Copyrighted Material is Fair Use
Last week, two U.S. district judges decided cases involving Anthropic and Meta on the question of whether training LLMs on copyrighted works qualifies as fair use. While both judges sided with the AI companies, they sharply disagreed about how the Copyright Act should apply to similar cases—leaving legal precedent on the question ambiguous.
One judge ruled that training Anthropic’s Claude on copyrighted books is fair use. U.S. District Judge William Alsup granted a summary judgment that Anthropic’s use of copyrighted books to train LLMs qualifies as fair use. The order held that three of the four factors courts consider when determining whether a given use of a copyrighted work is fair favored Anthropic’s use in training LLMs.
- The purpose and character of the use. The court held that using copyrighted books to train LLMs is highly transformative, favoring fair use.
- The nature of the copyrighted work. The books in question were expressive, pointing against fair use.
- The amount and substantiality of the portion used. The court held that it was reasonably necessary to use the entirety of books in training LLMs, favoring fair use.
- The effect of the use upon the potential market for or value of the copyrighted work. No exact copies or knockoffs resulted from the use of copyrighted books to train Claude, since Anthropic implemented guardrails to prevent Claude from exactly replicating the works on which it was trained. While the use may result in an “explosion” of AI-generated writing that competes with the copyrighted books, the court held that such a market effect doesn’t count under the Copyright Act.
Digitizing print books Anthropic lawfully bought is also protected—but piracy is not. Judge Alsup drew a sharp line between scanning paperbacks Anthropic had purchased and the millions of volumes it admitted downloading from pirate libraries. Turning a lawfully owned print copy into a PDF is fair use, but pirating books is not. That issue will proceed to trial.
In a case against Meta, another judge reached the opposite conclusion. While U.S. District Judge Vince Chhabria sided with Meta in its case, his order made clear he only did so because he believed the plaintiffs made the wrong arguments and presented the wrong evidence.
His analysis of whether using copyrighted books to train LLMs is fair use agrees with Judge Alsup’s on the first three factors—but sharply disagrees on the relevance of market effects. The upshot, he writes, is that “in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission.” He sided with Meta only because the plaintiffs failed to provide arguments or evidence showing that Meta’s LLMs resulted in market harm to their books.
The judges disagree on whether “indirect displacement” is a relevant market effect under the Copyright Act. Both orders assume that LLMs may now or soon be able to generate many competitors to human-written books, which could harm the market for human-written books.
Judge Alsup writes that the authors’ complaint about such an effect is “no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,” which is “not the kind of competitive or creative displacement that concerns the Copyright Act.”
However, Judge Chhabria responds that “using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take.” That is, he argues that a similarity in kind does not outweigh a vast difference in magnitude.
Higher courts will likely settle the dispute. While Judge Alsup’s order might have set a precedent for similar cases, Chhabria’s disagreement leaves the case law ambiguous. However, both decisions fall under the jurisdiction of the Ninth Circuit, which has yet to rule on AI fair use. The authors in Anthropic’s case, at least, have indicated that they will appeal the decision to the Ninth Circuit—and, ultimately, the issue may be up to the Supreme Court to decide.
In Other News
- Michael C. Horowitz and Lauren A. Kahn argue that placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.
- Laura González Salmerón discusses how copyright law is under pressure from generative AI.
- Kristin O’Donoghue argues that a moratorium on state AI legislation would upend federalism and halt the experiments that drive smarter policy.
- Pete Buttigieg wrote a blog post arguing that AI presents “a fundamental change to our society—and we remain dangerously underprepared.”
- Researchers at UC Berkeley released CyberGym, a new cybersecurity benchmark. The LLMs they evaluated discovered 15 zero-day vulnerabilities in large software projects.
- A new report from the Forecasting Research Institute shows that experts and superforecasters predict that existing AI capabilities may substantially increase the risk of human-caused epidemics.
See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.