TechCrunch News, February 16
Open source LLMs hit Europe’s digital sovereignty roadmap

The OpenEuroLLM project aims to develop a series of “truly” open source large language models covering all EU languages, in order to advance Europe’s digital sovereignty. A collaboration between more than 20 organizations with a budget of €37.4 million, it sets out to build foundation models that preserve the EU’s linguistic and cultural diversity. Despite challenges including the large number of participants, a relatively limited budget, and a name clash with a similar project, OpenEuroLLM hopes to draw on existing datasets and high-performance computing resources to release a first version by mid-2026 and complete its final iteration by 2028. The project is committed to opening up its data and models as far as possible while complying with EU AI regulations.

🇪🇺 The OpenEuroLLM project aims to develop a series of open source large language models covering all EU languages, with the goal of preserving Europe’s linguistic and cultural diversity and advancing its digital sovereignty.

💰 The project’s budget is €37.4 million, funded mainly by the EU’s Digital Europe Programme and devoted to building the models themselves, while the broader EuroHPC programme, with a budget of roughly €7 billion, supports the related compute resources.

🤝 OpenEuroLLM is a collaboration between more than 20 organizations across academia, research institutes, and industry, though the sheer number of participants has prompted questions about diluted focus. French AI unicorn Mistral is not taking part.

💾 The project will build on the data and experience accumulated by the HPLT project, which has released a dataset built from 4.5 PB of web crawls and more than 20 billion documents; OpenEuroLLM will add further data on top of this.

⚖️ OpenEuroLLM is committed to opening up its data and models as far as possible, but copyright and AI regulations may require restricting access to some of the training data to ensure model quality and compliance.

Large language models (LLMs) landed on Europe’s digital sovereignty agenda with a bang last week, as news emerged of a new program to develop a series of “truly” open source LLMs covering all European Union languages.

This includes the current 24 official EU languages, as well as languages for countries currently negotiating to join the EU, such as Albania. Future-proofing is the name of the game.

OpenEuroLLM is a collaboration between some 20 organizations, co-led by Jan Hajič, a computational linguist at Charles University in Prague, and Peter Sarlin, CEO and co-founder of Finnish AI lab Silo AI, which AMD acquired last year for $665 million.

The project fits a broader narrative that has seen Europe push digital sovereignty as a priority, enabling it to bring mission-critical infrastructure and tools closer to home. Most of the cloud giants are investing in local infrastructure to ensure EU data stays local, while AI darling OpenAI recently unveiled a new offering that allows customers to process and store data in Europe.

Elsewhere, the EU recently signed an $11 billion deal to create a sovereign satellite constellation to rival Elon Musk’s Starlink.

So OpenEuroLLM is certainly on-brand.

However, the stated budget just for building the models themselves is €37.4 million, with roughly €20 million coming from the EU’s Digital Europe Programme — a drop in the ocean compared to what the giants of the corporate AI world are investing. The actual budget is more when you factor in funding allocated for tangential and related work, and arguably the biggest expense is compute. The OpenEuroLLM project’s partners include EuroHPC supercomputer centers in Spain, Italy, Finland, and the Netherlands — and the broader EuroHPC project has a budget of around €7 billion.

But the sheer number of disparate participating parties, spanning academia, research, and corporations, has led many to question whether its goals are achievable. Anastasia Stasenko, co-founder of LLM company Pleias, questioned whether a “sprawling consortia of 20+ organizations” could have the same measured focus as a homegrown private AI firm.

“Europe’s recent successes in AI shine through small focused teams like Mistral AI and LightOn — companies that truly own what they’re building,” Stasenko wrote. “They carry immediate responsibility for their choices, whether in finances, market positioning, or reputation.”

The OpenEuroLLM project is either starting from scratch or it has a head start — depending on how you look at it.

Since 2022, Hajič has also been coordinating the High Performance Language Technologies (HPLT) project, which has set out to develop free and reusable datasets, models, and workflows using high-performance computing (HPC). That project is scheduled to end in late 2025, but it can be viewed as a sort of “predecessor” to OpenEuroLLM, according to Hajič, given that most of the partners on HPLT (aside from the U.K. partners) are participating here, too.

“This [OpenEuroLLM] is really just a broader participation, but more focused on generative LLMs,” Hajič said. “So it’s not starting from zero in terms of data, expertise, tools, and compute experience. We have assembled people who know what they’re doing — we should be able to get up to speed quickly.”

Hajič said that he expects the first version(s) to be released by mid-2026, with the final iteration(s) arriving by the project’s conclusion in 2028. But those goals might still seem lofty when you consider that there isn’t much to poke at yet beyond a bare-bones GitHub profile.

“In that respect, we are starting from scratch — the project started on Saturday [February 1],” Hajič said. “But we have been preparing the project for a year [the tender process opened in February 2024].”

From academia and research, organizations spanning Czechia, the Netherlands, Germany, Sweden, Finland, and Norway are part of the OpenEuroLLM cohort, in addition to the EuroHPC centers. From the corporate world, Finland’s AMD-owned AI lab Silo AI is on board, as are Aleph Alpha (Germany), Ellamind (Germany), Prompsit Language Engineering (Spain), and LightOn (France).

One notable omission from the list is French AI unicorn Mistral, which has positioned itself as an open source alternative to incumbents such as OpenAI. While Mistral didn’t respond to TechCrunch’s request for comment, Hajič did confirm that he tried to initiate conversations with the startup, but to no avail.

“I tried to approach them, but it hasn’t resulted in a focused discussion about their participation,” Hajič said.

The project could still gather new participants as part of the EU program that’s providing funding, though it will be limited to EU organizations. This means that entities from the U.K. and Switzerland won’t be able to take part, in contrast to the Horizon R&D program, which the U.K. rejoined in 2023 after a prolonged Brexit stalemate and which provided funding to HPLT.

The project’s top-line goal, as per its tagline, is to create: “A series of foundation models for transparent AI in Europe.” Additionally, these models should preserve the “linguistic and cultural diversity” of all EU languages — current and future.

What this translates to in terms of deliverables is still being ironed out, but it will likely mean a core multilingual LLM designed for general-purpose tasks where accuracy is paramount. And then also smaller “quantized” versions, perhaps for edge applications where efficiency and speed are more important.
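
To make the “quantized” idea concrete: quantization stores a model’s weights at lower numerical precision so the model needs far less memory and can run on smaller hardware. The sketch below shows the common pattern using the Hugging Face transformers and bitsandbytes libraries; the model identifier is purely hypothetical, since OpenEuroLLM has not published any checkpoints, and this is not a description of the project’s actual tooling.

```python
# Minimal sketch of loading a hypothetical multilingual checkpoint in 4-bit
# precision. The model ID is a placeholder (OpenEuroLLM has released no
# weights), but the pattern is standard for open LLMs hosted on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "openeurollm/base-multilingual"  # hypothetical identifier

# 4-bit weights cut the memory footprint to roughly a quarter of fp16,
# which is what makes single-GPU or edge deployment plausible.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",
)

prompt = "Summarise in Czech: what does digital sovereignty mean for Europe?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```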

“This is something we still have to make a detailed plan about,” Hajič said. “We want to have it as small but as high-quality as possible. We don’t want to release something which is half-baked, because from the European point-of-view this is high-stakes, with lots of money coming from the European Commission — public money.”

While the goal is to make the model as proficient as possible in all languages, attaining equality across the board could also be challenging.

“That is the goal, but how successful we can be with languages with scarce digital resources is the question,” Hajič said. “But that’s also why we want to have true benchmarks for these languages, and not to be swayed toward benchmarks which are perhaps not representative of the languages and the culture behind them.”

In terms of data, this is where a lot of the work from the HPLT project will prove fruitful, with version 2.0 of its dataset released four months ago. This dataset was built from 4.5 petabytes of web crawls and contains more than 20 billion documents, and Hajič said they will add further data from Common Crawl (an open repository of web-crawled data) to the mix.
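
For readers unfamiliar with web-crawl corpora, here is a hedged illustration of what that raw material looks like: Common Crawl distributes its snapshots as WARC archives that can be streamed record by record, for example with the open source warcio library. This is a generic sketch, not a description of HPLT’s or OpenEuroLLM’s actual pipeline, and the file name is a placeholder.

```python
# Minimal sketch: stream records from a Common Crawl WARC archive with the
# warcio library. Real pipelines (HPLT's included) add language
# identification, deduplication, and quality filtering on top of this step.
from warcio.archiveiterator import ArchiveIterator

def iter_html_pages(warc_path):
    """Yield (url, raw_html_bytes) for each HTML response in a WARC archive."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            headers = record.http_headers
            content_type = headers.get_header("Content-Type") if headers else ""
            if not content_type or "text/html" not in content_type:
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            yield url, record.content_stream().read()

if __name__ == "__main__":
    # Example usage against a locally downloaded crawl segment (placeholder name).
    for i, (url, html) in enumerate(iter_html_pages("CC-MAIN-example.warc.gz")):
        if i >= 3:
            break
        print(url, len(html), "bytes")
```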

In traditional software, the perennial struggle between open source and proprietary revolves around the “true” meaning of “open source.” This can be resolved by deferring to the formal “definition” as per the Open Source Initiative, the industry stewards of what are and aren’t legitimate open source licenses.

More recently, the OSI has formulated a definition of “open source AI,” though not everyone is happy with the outcome. Open source AI proponents argue that not just the models should be freely available, but also the datasets, pretrained models, and weights: the full shebang. The OSI’s definition doesn’t make training data mandatory, because it says AI models are often trained on proprietary data or data with redistribution restrictions.

Suffice it to say, the OpenEuroLLM project is facing these same quandaries, and despite its intentions to be “truly open,” it will probably have to make some compromises if it’s to fulfill its “quality” obligations.

“The goal is to have everything open. Now, of course, there are some limitations,” Hajič said. “We want to have models of the highest quality possible, and based on the European copyright directive we can use anything we can get our hands on. Some of it cannot be redistributed, but some of it can be stored for future inspection.”

What this means is that the OpenEuroLLM project might have to keep some of its training data under wraps, while making it available to auditors upon request, as required for high-risk AI systems under the terms of the EU AI Act.

“We hope that most of the data [will be open], especially the data coming from the Common Crawl,” Hajič said. “We would like to have it all completely open, but we will see. In any case, we will have to comply with AI regulations.”

Another criticism that emerged in the aftermath of OpenEuroLLM’s formal unveiling was that a very similar project had launched in Europe just a few months earlier. EuroLLM, which launched its first model in September and a follow-up in December, is co-funded by the EU alongside a consortium of nine partners. These include academic institutions such as the University of Edinburgh and corporations such as Unbabel, which last year won millions of GPU training hours on EU supercomputers.

EuroLLM shares similar goals to its near-namesake: “To build an open source European Large Language Model that supports 24 Official European Languages, and a few other strategically important languages.”

Andre Martins, head of research at Unbabel, took to social media to highlight these similarities, noting that OpenEuroLLM is appropriating a name that already exists. “I hope the different communities collaborate openly, share their expertise, and don’t decide to reinvent the wheel every time a new project gets funded,” Martins wrote.

Hajič called the situation “unfortunate,” adding that he hoped they might be able to cooperate, though he stressed that due to the source of its funding in the EU, OpenEuroLLM is restricted in terms of its collaborations with non-EU entities, including U.K. universities.

The arrival of China’s DeepSeek, and the cost-to-performance ratio it promises, has given some encouragement that AI initiatives might be able to do far more with much less than initially thought. However, over the past few weeks, many have questioned the true costs involved in building DeepSeek.

“With respect to DeepSeek, we actually know very little about what exactly went into building it,” Peter Sarlin, who is technical co-lead on the OpenEuroLLM project, told TechCrunch.

Regardless, Sarlin reckons OpenEuroLLM will have access to sufficient funding, since its budget mostly needs to cover people. A large chunk of the cost of building AI systems is compute, and that should mostly be covered through the project’s partnership with the EuroHPC centers.

“You could say that OpenEuroLLM actually has quite a significant budget,” Sarlin said. “EuroHPC has invested billions in AI and compute infrastructure, and have committed billions more into expanding that in the coming few years.”

It’s also worth noting that the OpenEuroLLM project isn’t building toward a consumer- or enterprise-grade product. It’s purely about the models, and this is why Sarlin reckons the budget it has should be ample.

“The intent here isn’t to build a chatbot or an AI assistant — that would be a product initiative requiring a lot of effort, and that’s what ChatGPT did so well,” Sarlin said. “What we’re contributing is an open source foundation model that functions as the AI infrastructure for companies in Europe to build upon. We know what it takes to build models, it’s not something you need billions for.”

Since 2017, Sarlin has spearheaded AI lab Silo AI, which, in partnership with others including the HPLT project, launched the family of Poro and Viking open models. These already support a handful of European languages, but the company is now readying its next iteration, the “Europa” models, which will cover all European languages.

And this ties in with the whole “not starting from scratch” notion espoused by Hajič — there is already a bedrock of expertise and technology in place.

As critics have noted, OpenEuroLLM does have a lot of moving parts — which Hajič acknowledges, albeit with a positive outlook.

“I’ve been involved in many collaborative projects, and I believe it has its advantages versus a single company,” he said. “Of course they’ve done great things at the likes of OpenAI to Mistral, but I hope that the combination of academic expertise and the companies’ focus could bring something new.”

And in many ways, it’s not about trying to outmaneuver Big Tech or billion-dollar AI startups; the ultimate goal is digital sovereignty: (mostly) open foundation LLMs built by, and for, Europe.

“I hope this won’t be the case, but if, in the end, we are not the number one model, and we have a ‘good’ model, then we will still have a model with all the components based in Europe,” Hajič said. “This will be a positive result.”
