

Europe likes to talk about “strategic autonomy”. Yet on artificial intelligence, the EU is quietly sleepwalking into digital dependency. A handful of US and Chinese technology giants now control the large language models that increasingly mediate what Europeans read, see and believe. When a few unaccountable actors own the infrastructure of meaning, this is not just a market failure – it is a democratic crisis.
News media have long carried legal and moral obligations to serve the public interest. Generative AI now produces and curates content at far greater scale, without any equivalent duty of care. We still cannot reliably distinguish human from AI-generated material, while today’s foundation models amplify bias, trample intellectual property and privacy, and optimise solely for engagement. The result is a knowledge infrastructure that is structurally misaligned with European law, values and democratic norms.
Brussels has already chosen one answer: regulate. The AI Act sets important horizontal rules and creates a new supervisory architecture for powerful models. But regulation without capability is a recipe for dependence. Europe will not negotiate from a position of strength if the critical models, data pipelines and chips are all designed and controlled elsewhere. Without sovereign AI capacity, the EU risks becoming a “rule-taker” that writes guidelines for systems it neither builds nor truly understands.
Europe needs to build AI sovereignty.
There is, however, a sharper and more realistic path than trying to clone the biggest US frontier models: a strategic bet on small, open-source, high-performance models that Europe can actually own, govern and export. There is a huge underserved market for models that are:
- small enough to run on local or edge infrastructure
- open and inspectable
- tailored to specific languages, sectors and public-interest tasks
These models are cheaper, faster, more energy efficient and far better suited to Europe’s decentralised economy of SMEs, hospitals, schools and public administrations than hyperscale black boxes. Crucially, they can be designed from the ground up to embed liberal democratic values – not as a compliance afterthought, but as a training objective.
The irony is that Europe is already pioneering this future, just not at the scale required. Initiatives such as emerging multilingual open models and scattered research consortia show that it is technically feasible to train competitive small models on high-quality, domain-specific datasets. They unlock innovation in robotics, process automation and consumer-facing services, while enabling on-premise deployments that actually respect data protection and security requirements. The problem is not vision – it is fragmentation, underfunding and lack of political urgency.
What would a serious AI sovereignty agenda look like?
First, a central, EU-level funding vehicle dedicated to open small models, with long-term mandates rather than short project cycles.
Second, shared compute, data and algorithmic infrastructure that universities, SMEs and public bodies can use without begging for API access from foreign platforms.
Third, governance-by-design: practical tooling for AI Act compliance, risk management and auditability built directly into the model lifecycle, not added as a legal fig leaf at the end.
Finally, a coalition of like-minded democracies around open, human-centred AI – a counterweight to both surveillance-authoritarian and hyper-extractive AI models.
The choice facing Europe is brutally simple. Either the EU accepts the role of a digital consumer economy, renting cognitive infrastructure and importing the value chains and norms that come with it. Or it invests, at scale, in its own AI capabilities – starting with small, open models that both protect the public interest and catalyse innovation on European terms. Talking about “sovereignty” without building this capacity is not strategic autonomy. It is strategic denial.
Anna Felländer, co-founder Asenion, President Asenion Europe, co-founder and Board Member Research Institute for Sustainable AI
Fredrik Heintz, Professor of Computer Science at Linköping University, Program Director WASP-ED, Co-Director WASP, Coordinator TrustLLM, Professor Chair Sweden - Brazil on AI and Autonomous Systems