Published on June 16, 2025 12:50 PM GMT
The RAISE Act has overwhelmingly passed the New York Assembly (95-1 among Democrats and 24-21 among Republicans) and New York Senate (37-1 among Democrats, 21-0 among Republicans). Governor Kathy Hochul now has to decide whether to sign it, which she has 10 non-Sunday days to do once the bill is delivered (30 if the legislature is out of session), but the bill might not be delivered for six months.

The aim of this post, now that we are seeing increasing public discussion, is to go through the bill to understand exactly what it would and would not do.
Overall Take
The RAISE Act is centrally a transparency bill. It requires frontier model developers to maintain, publish, and adhere to (one might say 'open source,' except that they can redact details for various reasons) a safety and security protocol (SSP) that outlines how they will, before releasing their frontier models, take appropriate steps to reduce the risk of critical harm (100 casualties or $1 billion in damages) caused or materially enabled by those models. It must designate senior people as responsible for implementation.

RTFB: The RAISE Act
There are two big advantages we have in reading the RAISE Act:

1. It is short and simple.
2. We've analyzed similar things before.
Definitions: Frontier Model
These are mostly standard. The AI definition has been consistent for a while. Compute cost is defined as the published market price cost of cloud compute, as reasonably assessed by the person doing the training, which is as clear and generous as one could hope.

The most important definition is 'frontier model':

> 6. "Frontier model" means either of the following:
>
> (a) an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars; OR
>
> (b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision, provided that the compute cost for such model produced by applying knowledge distillation exceeds five million dollars.

The first provision will centrally be 'you spent $100 million.' Which remains a lot of dollars, and means this will only apply to a handful of frontier labs. But also note that 10^26 will for a while remain a lot of FLOPS. Epoch looked at this question, and also estimates the costs of various models, with the only current model likely over 10^26 being Grok 3 (o3-pro suggests it is not impossible that Gemini Ultra or a few others might just barely also qualify, although I find this highly unlikely).

The question is the second provision. How often will companies make distillations that cost more than $5 million and result in 'similar or equivalent capabilities' to the original, as required by the definition of distillation used here?

o3-pro believes the current number of such models, even without considering the capabilities requirement, is probably zero (the possible exception is Claude Haiku, if you think it has sufficiently comparable capabilities). It anticipates the number of $5 million distillations will not remain zero, and expects the distillations to mostly (but not entirely) come from the same companies releasing the $100 million frontier models.

Its baseline scenario is that by 2029 there will be ~6 American frontier-trainers, in particular OpenAI, DeepMind, Anthropic, Meta, xAI and then maybe Amazon or Apple or perhaps an open source collective, and ~6 more distillers on top of that passing the $5 million mark, starting with Cohere, then maybe Databricks or Perplexity.

A 'large developer' means someone spending a combined $100 million in training compute, or someone who buys the full intellectual property rights to the results of that, with academic institutions doing research excluded. (I sketch these thresholds in code at the end of this section.)

This bill would have zero impact on everyone else.

So yes, there will be talk about how this will be 'more difficult' for 'smaller' companies. But by 'smaller' companies we mean a handful of large companies, and by 'more difficult' we mean a tiny fraction of overall costs. And as always, please point to the specific thing they would have to do that you don't think is worth doing, or that would even have a substantial impact on their business costs.

Bill opponents, of course, are telling the same lies about this that they told about SB 1047. Brianna January of the 'Chamber of Progress' calls this 'an eviction notice for New York's 9,000 AI startups,' saying it 'would send AI innovators packing,' when exactly zero of those 9,000 startups would have to lift a single finger in response to this bill.

This is pure bad faith Obvious Nonsense, and you should treat anyone who says similar things accordingly. (The other Obvious Nonsense claim here is that the bill was 'rushed' and lacked a public hearing. The bill very much followed normal procedures and had debate on the floor, it was in the public pipeline for months, and bills in New York do not otherwise get public hearings; that's a non sequitur.)
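To make the thresholds concrete, here is a minimal sketch, in Python, of the two prongs as I read them. The function and parameter names are my own illustration, not anything from the bill, and the 'similar or equivalent capabilities' requirement for distillations is a judgment call no code can make:

```python
# Illustrative only: my reading of the 'frontier model' thresholds, not legal advice.

def is_frontier_model(
    training_ops: float,                    # total computational operations used in training
    training_cost_usd: float,               # compute cost at published cloud market prices
    distilled_from_frontier: bool = False,  # knowledge-distilled from a prong-(a) model
    distillation_cost_usd: float = 0.0,     # compute cost of the distillation itself
) -> bool:
    # Prong (a): more than 10^26 operations AND compute cost over $100 million.
    prong_a = training_ops > 1e26 and training_cost_usd > 100_000_000
    # Prong (b): a distillation of a prong-(a) frontier model costing over $5 million.
    # (The bill's distillation definition also requires 'similar or equivalent
    # capabilities' to the original, which this sketch cannot capture.)
    prong_b = distilled_from_frontier and distillation_cost_usd > 5_000_000
    return prong_a or prong_b

# A $150 million, 3x10^26-operation training run qualifies under prong (a).
assert is_frontier_model(training_ops=3e26, training_cost_usd=150_000_000)
# A $4 million distillation of a frontier model falls short of prong (b).
assert not is_frontier_model(
    training_ops=1e24, training_cost_usd=4_000_000,
    distilled_from_frontier=True, distillation_cost_usd=4_000_000,
)
```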
Definitions: Critical Harm
> "Critical harm" means the death or serious injury of one hundred or more people or at least one billion dollars of damages to rights in money or property caused or materially enabled by a large developer's use, storage, or release of a frontier model, through either of the following:
>
> (a) The creation or use of a chemical, biological, radiological, or nuclear weapon; or
>
> (b) An artificial intelligence model engaging in conduct that does both of the following:
>
> (i) Acts with no meaningful human intervention; and
>
> (ii) Would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
>
> A harm inflicted by an intervening human actor shall not be deemed to result from a developer's activities unless such activities were a substantial factor in bringing about the harm, the intervening human actor's conduct was reasonably foreseeable as a probable consequence of the developer's activities, and could have been reasonably prevented or mitigated through alternative design, or security measures, or safety protocols.

We have 'caused or materially enabled,' plus 'substantial factor' and harm that mitigations 'could have reasonably prevented,' with either 100 deaths or serious injuries or a billion dollars in damage as the thresholds, and the harm has to come either through a CBRN weapon or through the model autonomously engaging in conduct that would constitute a crime under the penal law.

That seems like a robust way of saying 'if you trigger this provision, you screwed up.'
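The same treatment for 'critical harm': a minimal sketch, assuming my reading of the severity threshold and the two qualifying pathways. The intervening-human-actor clause (substantial factor, foreseeability, preventability) is a further filter that does not reduce to a boolean, so I leave it out; all names here are mine:

```python
# Illustrative only: my reading of the 'critical harm' definition, not legal advice.

def is_critical_harm(
    deaths_or_serious_injuries: int,
    damages_usd: float,
    cbrn_weapon: bool,                      # creation or use of a CBRN weapon
    no_meaningful_human_intervention: bool,
    would_be_penal_law_crime: bool,         # if the conduct were committed by a human
) -> bool:
    # Severity threshold: 100+ people dead or seriously injured, or at least
    # $1 billion in damages to rights in money or property.
    severe = deaths_or_serious_injuries >= 100 or damages_usd >= 1_000_000_000
    # Pathway (a): CBRN weapon. Pathway (b): autonomous conduct that would be a
    # crime requiring intent, recklessness, or gross negligence.
    pathway = cbrn_weapon or (no_meaningful_human_intervention and would_be_penal_law_crime)
    return severe and pathway
```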
Definition: Safety Incident
They have to be reported, so what exactly are they?

> "Safety incident" means a known incidence of critical harm or an incident of the following kinds that occurs in such a way that it provides demonstrable evidence of an increased risk of critical harm:
>
> - A frontier model autonomously engaging in behavior other than at the request of a user;
> - Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model;
> - The critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model;
> - Unauthorized use of a frontier model.

The incidence of an actual critical harm is clear. The second half of the definition has two halves:

- It has to involve one of the four things listed.
- It has to provide demonstrable evidence of an increased risk of critical harm.
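Put together, a minimal sketch of the two-branch test, with the four incident kinds paraphrased as an enum (the names and structure are mine, not the bill's):

```python
# Illustrative only: my reading of the 'safety incident' definition.
from enum import Enum, auto
from typing import Optional

class IncidentKind(Enum):
    AUTONOMOUS_BEHAVIOR = auto()  # model acting other than at the request of a user
    WEIGHTS_COMPROMISE = auto()   # theft, leak, unauthorized access, or escape of weights
    CONTROL_FAILURE = auto()      # critical failure of technical or administrative controls
    UNAUTHORIZED_USE = auto()     # unauthorized use of a frontier model

def is_safety_incident(
    known_critical_harm: bool,
    kind: Optional[IncidentKind],
    demonstrable_increased_risk: bool,  # demonstrable evidence of increased critical-harm risk
) -> bool:
    if known_critical_harm:
        return True  # an actual critical harm is automatically a safety incident
    # Otherwise both halves must hold: one of the four kinds, plus the evidence test.
    return kind is not None and demonstrable_increased_risk
```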
You Have To Report Safety Incidents The Way We Report Cybersecurity Breaches
As in, within 72 hours of any safety incident, you have to notify the attorney general and DHSES. This is the common standard used for cybersecurity breaches. You have to include:

- The date of the incident.
- Why it qualifies as a safety incident.
- 'A short and plain statement describing the safety incident.'
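As a sketch, the required contents and the deadline might look like this (field names are mine; the bill specifies the substance, not a format):

```python
# Illustrative only: the notification requirement as described above.
from dataclasses import dataclass
from datetime import date, datetime, timedelta

@dataclass
class SafetyIncidentNotice:
    incident_date: date
    qualifying_basis: str  # why it qualifies as a safety incident
    description: str       # 'a short and plain statement describing the safety incident'

def notification_deadline(incident_time: datetime) -> datetime:
    # The attorney general and DHSES must be notified within 72 hours.
    return incident_time + timedelta(hours=72)
```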
Definition: Safety and Security Protocol (SSP)
What are we actually asking companies to produce, exactly? Documentation and a description of technical and organizational protocols that, if fully implemented, would:

- 'Appropriately reduce the risk of critical harm.'
- 'Appropriately reduce the risk of' unauthorized access to or misuse of the model weights 'leading to critical harm.'
- Describe a detailed test procedure to evaluate potential misuse, loss of control, or combination with other software that could potentially cause critical harm.
- Enable compliance with this article.
- Designate senior personnel to be responsible for ensuring compliance.
You Have To Have And Implement An SSP
Before deploying a frontier model (where deploying means externally, as in giving a third party access), the developer must write and implement an SSP, retain an up-to-date copy of that SSP, conspicuously publish a redacted copy of the SSP, give the attorney general and DHSES access upon request to the full SSP, and retain copies of all your test results sufficient to allow third-party replication.

As always, there are no specific requirements for the SSP, other than that it must 'appropriately reduce the risk' of critical harms, both directly and through unauthorized access, that it spell out your testing procedure, and that you actually have someone ensure you use it. If you want to write the classic 'lol we're Meta, we don't run tests, full open weights release without them seems appropriate, I'm sure it will be fine,' you can do that, although you might not like what happens when people notice you did that, or when the risks materialize, or when the AG notices you're not taking appropriate action and sues you.

You need to conduct an annual review of the SSP to adjust for increased model capabilities, and make and publish any appropriate adjustments. Seems wise.

You Have To Not Release Models That Would Create An Unreasonable Risk Of Critical Harm
I for one think that if your model would create an unreasonable risk of critical harm, then that means you shouldn't release it. But that's just me.

You Shall Not Knowingly Make False Or Materially Misleading Statements or Omissions
Again, yeah, I mean, I hope that stands to reason.

What Are The Consequences?
The attorney general can bring a civil action, with penalties of:

- $10 million for the first violation, $30 million for additional ones.
- Injunctive or declaratory relief.
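If, as the phrasing suggests (though I am not certain the penalties stack this way), each violation draws its own penalty, maximum exposure would be:

```python
# Illustrative arithmetic only, assuming penalties stack per violation.
def max_civil_penalty_usd(violations: int) -> int:
    if violations <= 0:
        return 0
    # $10 million for the first violation, $30 million for each additional one.
    return 10_000_000 + 30_000_000 * (violations - 1)

assert max_civil_penalty_usd(3) == 70_000_000
```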