Quite a lot of people are talking about doing "AI Safety Communications" to convince the US government to do things like take "National Security Risks in Frontier Models" seriously, invest in "AI Interpretability and Control", and build an "AI Evaluations Ecosystem". But uh...
The White House just released an AI Action plan, directing agencies to (among other things):
- Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models
- Invest in AI Interpretability, Control, and Robustness Breakthroughs
- Build an AI Evaluations Ecosystem
I haven't seen much discussion of the recent AI Action Plan in this community; maybe I missed it. (All I've seen is Zvi declaring it "pretty good.")
In the absence of high-profile discussion of the AI Action Plan, it kind of feels like people are mobilizing to tell the US Government things it has not just heard, but has already made a clear priority to work on.
There's nothing wrong with doing more comms, and if you're especially skilled at massive public outreach, then I guess do what you're good at.
But I feel like the place to have a big marginal impact right now is to dive into the areas the Trump admin has already ordered the government to work on, and come up with great new ideas for implementing these priorities as effectively as possible. These are complicated questions that people aren't sure how to solve; getting a bunch of smart people who are familiar with frontier models to help think things through seems like it would be super useful.
What would this concretely look like?
First, learn about policy levers. As a random example, learn the difference between CISA (the Cybersecurity and Infrastructure Security Agency) and the NSC (the National Security Council). I've been a bit shocked by how little super-smart AI people know about government.
Second, think really hard about how to implement the AI Action Plan's priorities, and come up with some solid plans.
Third, publicize your ideas via Twitter/X, Substack, and/or (ideally) white papers with a respected think tank, newspaper, or academic journal.
To be super concrete, here's one of the policy agendas from the AI Action Plan for "Promote Secure-By-Design AI Technologies and Applications":
- Led by DOD in collaboration with NIST at DOC and ODNI, continue to refine DOD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits.
- Led by ODNI in consultation with DOD and CAISI at DOC, publish an IC Standard on AI Assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence.
Seems like there's good work to be done in:
- Figuring out what should be in "DOD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits"
- Figuring out an IC Standard on AI Assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence
- Defining what roles the DOD (Department of Defense), NIST (National Institute of Standards and Technology), DOC (Department of Commerce), and ODNI (Office of the Director of National Intelligence) should take in implementing this agenda
Of course, the agencies will be working on this internally. But it's the role of think tanks, academia, and broader civil society to generate great policy ideas that guide what the government tries to do.
Considering how much AI technical talent sits outside of government, this seems like an area where having external experts chime in could be especially valuable.