Published on June 17, 2025 6:02 PM GMT
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition: The New York Legislature passes an act regulating frontier AI—but it may not be signed into law for some time.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe to receive future versions.
The RAISE Act
New York may soon become the first state to regulate frontier AI systems. On June 12, the state’s legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S.
New York’s RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance.
- Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the Division of Homeland Security and Emergency Services, keep the unredacted version—plus all supporting test data—for five years, and review the plan each year.
- Withhold any model that presents an “unreasonable risk of critical harm.” Developers must delay release and work to reduce risk if evaluations show the system poses an unreasonable risk of causing at least 100 deaths or $1 billion in damage through weapons of mass destruction or automated criminal activity.
- Report safety incidents within seventy-two hours. If developers discover the theft of model weights, evidence of dangerous autonomous behavior, or other events that demonstrably raise the risk of critical harm, they must report the discovery to state officials within three days.
- Face penalties for non-compliance. The New York attorney general may seek up to $10 million for a first violation and $30 million for subsequent violations.
The RAISE Act only regulates the largest developers. Mirroring California’s SB 1047—vetoed by Governor Gavin Newsom in 2024—the Act covers any model costing at least $100 million in compute.
Obligations fall on developers that have trained at least one frontier model and spent a cumulative $100 million on such training—and on anyone who later buys a model’s full intellectual-property rights. Accredited colleges are exempt when conducting academic research, but commercial spin-outs are not. These thresholds and carve-outs focus the legal burden on the handful of firms capable of causing catastrophic harm.
While New York acts, the U.S. Congress weighs a federal moratorium on state AI regulation. The “One Big Beautiful Bill Act,” the budget reconciliation package the U.S. House of Representatives approved on May 22, contained a 10‑year federal moratorium on “any law or regulation” that “restricts, governs or conditions” the design, deployment, or use of AI systems.
The moratorium initially seemed unlikely to survive the Senate’s Byrd Rule, which bars extraneous policy provisions from budget reconciliation bills. The Senate Commerce Committee, chaired by Senator Ted Cruz, recently revised it to make compliance a condition for states to receive billions in federal broadband-expansion funds, a change that could allow the provision to clear the Byrd Rule.
However, the proposed moratorium has drawn criticism from some Republican lawmakers, including the House Freedom Caucus, who may be crucial to its survival. A recent poll found the proposal is unpopular with the party’s base: 50 percent of Republican voters said they opposed the moratorium, while 30 percent supported it. Last week, a bipartisan group of 260 state legislators also wrote a letter to Congress opposing the moratorium.
The RAISE Act isn’t law yet. Although both chambers have passed the bill, they have not yet delivered it to Governor Kathy Hochul—a step lawmakers can take at any point during 2025.
Once the bill is finally sent, Hochul will have up to 30 days to sign it, veto it, or negotiate “chapter amendments,” the back-and-forth revisions governors often use to tweak language before giving final approval. Until that clock starts, the measure sits in limbo, and its ultimate shape—possibly even its survival—remains an open question.
In Other News
Government
- Secretary of Commerce Howard Lutnick announced plans to reform the U.S. AI Safety Institute into the Center for AI Standards and Innovation (CAISI).
Industry
- Google released an upgraded preview of Gemini 2.5 Pro, which scores the highest on most benchmarks.
- Sam Altman wrote a new blog post discussing how an intelligence recursion would be rapid but “gentle”: "If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different."
- Meta invested $14.3 billion in Scale AI and hired its CEO, Alexandr Wang, to run a new superintelligence team.
Civil Society
- David “davidad” Dalrymple argues that in order to fulfill the potential of AI in safety-critical domains like energy grids, we need to develop more robust, mathematical guarantees of safety.
- Kevin Frazier writes about how, in the absence of federal legislation, the burden of managing AI risks has fallen to judges and state legislators—actors lacking the tools needed to ensure consistency, enforceability, or fairness.
- Vanessa Bates Ramirez writes that, while AI is increasingly being used for emotional support, research from OpenAI and MIT raises concerns that it may leave some users feeling even worse.
- Nora Ammann and Sarah Hastings-Woodhouse discuss how assurance technologies could help de-escalate an AI arms race.
- Epoch AI released a dataset that maps the world’s largest AI supercomputers.
- Yoshua Bengio launched LawZero, a nonprofit advancing “safe-by-design” AI.
See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Subscribe to receive future versions.