Dagens Industri OP: The Prime Minister is Wrong About AI Adoption in Sweden

June 30, 2025

Original: https://www.di.se/debatt/statsministern-har-fel-om-ai-adoption-i-sverige/

Pausing the implementation of the AI Act will not encourage Swedish or European organizations to adopt AI more widely. Quite the opposite: regulations and frameworks create clarity. The reason AI is not delivering the expected returns on investment or the promised productivity gains is twofold:

  1. Uncertainty about the risk landscape. The rapid pace of AI development has been accompanied by rising cases of discrimination, privacy violations, manipulation, and the spread of misinformation. This makes companies hesitant to adopt AI.
  2. Lack of oversight and governance. From a leadership perspective, the ability to monitor both returns and risks is missing. On top of that, technical tools for risk assessment are fragmented, and validation and controls are often absent.

The Prime Minister argues that he wants to pause the AI Act because it is too vague and will hinder innovation. This is incorrect. The EU member states have already reached agreement, and we cannot afford to wait.

The first draft of the AI regulation was published back in April 2021. Every EU member state has had time to designate an authority and establish regulatory sandboxes. Sweden is falling behind and will be late in appointing a responsible authority (see article).

Uncertainty in enforcement is no argument for delaying the AI Act. The requirement that “high-risk” AI systems involve human oversight and risk assessment—based on the severity and likelihood of harm to the end user—has been clear since 2021. The government has had ample time to prepare an authority to support both public and private Swedish companies. Since 2018, my company (initially supported by Vinnova funding for interdisciplinary research with KTH) has worked with major public and private organizations on managing AI risks from a cross-functional perspective.

As for the Prime Minister’s claim that AI regulation stifles innovation, the EU is already considering relief measures for small companies and entrepreneurs. Medium-sized and large companies, however, need clear guidelines and support to implement AI that people can trust.

In my view, without the guardrails provided by the AI Act, there are three very poor scenarios:

  1. Fewer AI solutions reach production because companies hesitate in the face of an unclear risk landscape and lack of risk assessment capabilities.
  2. Missed productivity gains as companies fail to implement even low- or medium-risk AI solutions, due to uncertainty about how their systems should be categorized.
  3. Unsafe AI systems reach the market without proper risk validation, leading to privacy violations, discrimination, or the spread of misinformation.

In the third scenario, companies risk losing the trust and respect of their customers. This is why we need regulation—to ensure we can trust this powerful technology. What is the alternative? Even today, organizations are unintentionally violating the fundamental rights of European citizens. Would you let your child go to a school with a teacher operating without rules or boundaries? Would you board an airplane without controls and safety standards?

Anna Felländer, Co-founder of Asenion, President of Asenion’s European Office
