Thanks to @Manuel Allgaier of AI Safety Berlin for his suggestion to write this post and his helpful feedback. And thanks to LW/AI Alignment Moderator Oliver for looking over the post.
LessWrong and AI safety have a unique opportunity: the EU is funding important projects to research AI safety, ranging from AI risk modelling, AI accountability, driverless transport, and robotics to information security and many other topics.
Here’s the problem: We’re not applying.
I work in EU procurement support, and I routinely see new AI tenders worth millions [see below] go to the same standard consultancies. Most have little understanding of the issues involved, and I doubt they care much. But they know the bureaucratic skills and paperwork inside out, so they win by default.
What they lack is AI expertise, which our community has accumulated over more than a decade. My EU procurement company and I will gladly help anybody in the LW community bridge the application-experience gap, for free. But that will only happen if some of you out there take action.
I don’t think the AI safety battle is lost, but winning it requires that the most qualified people and organisations win these projects, get resources, and gain the network and influence to shape AI in Europe (and the world). This isn’t just an opportunity to put our best resources into tackling AI issues; it’s also a chance for your organisation to win big contracts, typically worth €500k to €9M+ and lasting at least a year.
Case example: EU AI Safety tender worth €9M
To illustrate the opportunities we are missing, I want to share a recent tender with you.
This tender comes from the European Commission, is worth €9M, and would be perfect for our experts. Its purpose is to test the safety of large AI models in six areas:
- CBRN Risks (€1.8M): Risks of AI models helping in making or using chemical, biological, radiological, or nuclear weapons.
- Cyber Offense Risks (€2M): Risks of AI being used to create or scale cyberattacks, find security holes, or generate hacking tools.
- Loss of Control Risks (€1.8M): Risks from AI systems becoming too autonomous, deceptive, or hard to control or align with human values.
- Harmful Manipulation Risks (€1.2M): Risks of AI manipulating people (e.g. through persuasive conversations or deceptive advice), especially to cause harm.
- Sociotechnical Risks (€1.2M): Broader social risks, such as bias, discrimination, or threats to freedom of speech or health, that can emerge at scale.
- Agentic Evaluation Interface (€1.08M): A software system to test how AI models act as agents: making decisions, interacting with environments (e.g. browsers or operating systems), and handling complex tasks.
Each of the six areas is a sub-tender you can bid for individually, so you don’t have to bid for the whole tender. You can find the notice here:
https://ted.europa.eu/en/notice/-/detail/272332-2025
What are the criteria?
I have worked in the public tender world for over a year and talked with many organisations, from startups to corporations. I have seen many small startups fail because they didn’t understand the public tender game, so I will try to be realistic about what you need to win these projects:
- If your org is <15 people: you need to apply with a partner organisation.
- If your org is 15-50 people: you might be able to apply alone.
- If your org is >50 people and >3 years old: apply by yourself.
- Experience: your organisation or institute needs at least one significant reference project. Publications on AI will also help.
- Organisation type: you can be a private business, startup, SME, corporation, institute, non-profit, or university.
Got questions? Get in contact
We can advise you on whether you are a fit for a particular tender project and help you with the bid submission too. So if you decide to apply, you won’t be doing it alone.
Unsure? Questions? I could organize a Q&A webinar for interested parties in the LW, EA, and AI Safety communities; AI Safety Berlin has offered to host it. I would present some of the up-and-coming AI tenders, and you would be free to ask any questions about tenders. I’m willing to help your organisation for free because I think this will help strengthen the AI safety movement. If you’d join such a webinar, please express interest in the comments or via email to Connect@Tendery.ai. I’d organize the Q&A session if at least 3-4 people express interest (probably at 5 or 6pm CET; let me know if that time does not work for you).
Otherwise, write your questions to me directly at Connect@Tendery.ai. Be sure to write "AI Safety" in the subject line, and let me know if you want to participate in the webinar.
For more info, see my LinkedIn (Linkedin.com/in/arustamian/) and www.tendery.ai/en.