Europe has almost lost the race for consumer AI. But not because we are regulating ourselves to death in Brussels, as is so often and so eagerly claimed. The truth is more uncomfortable: The EU AI Act started with the noble goal of protecting people from the risks of the technology. But that very project was hijacked.

We overlooked the fact that the new rules no longer threaten the end-user business models of the big US tech giants — they serve them as a strategic shield. Through massive lobbying, the hurdles have been shifted in such a way that they now burden precisely those European companies whose strength lies in the B2B sector. Instead of minimizing risks, the law cements anti-innovation barriers. Yet understanding precisely these mechanisms is the fundamental prerequisite for effectively countering the actual risks of AI.

Application Categories in the EU AI Act

Before we dive into the details, we should understand the criteria by which an AI application is evaluated. The EU AI Act divides AI applications into four risk categories:

- Unacceptable risk: practices that are banned outright, such as social scoring.
- High risk: systems in sensitive areas such as hiring, credit scoring, or critical infrastructure, subject to strict documentation and oversight obligations.
- Limited risk: systems with transparency obligations, such as chatbots that must disclose that users are interacting with an AI.
- Minimal risk: everything else, which remains largely unregulated.

How these categories are defined, and what documentation requirements accompany them, is of such existential importance to many companies that they are willing to dig deep into their pockets to influence how those definitions are drawn.

Influencing Politicians is Worth Hundreds of Millions of Dollars to Corporations

Companies on both sides of the Atlantic spend immense sums to influence politicians in their decision-making. For example, some of the largest US tech corporations — namely Meta, Amazon, Google, Microsoft, Oracle, Qualcomm, Nvidia, and AMD — are spending $92 million on lobbying activities in 2025 alone. This includes advising politicians, drafting legislative proposals, and participating in committees. On top of this come additional expenditures to support preferred politicians in elections. Meta alone, for instance, has budgeted $65 million for the US midterm elections in the fall of 2026. [1]

European corporations are also investing in corresponding lobbying activities. However, at the European Union level, the large US tech companies dominate. The 10 US digital corporations with the highest lobbying expenditures in the EU spent about three times as much in 2025 as the top 10 European corporations. Meta leads the pack here with €10 million annually, while the entire tech industry accounts for €151 million. Nevertheless, European companies are still investing millions — for example, around €2 million from BMW, more than €3 million from TotalEnergies, and over €3.5 million from Siemens. Although the EU Transparency Register does not break down exactly which topics the lobbying money targets, AI was the dominant issue over the past year. [2]

How Lobbyists Influenced the EU AI Act

Where exactly do these lobbying activities intervene? During the years when the EU AI Act was being drafted by politicians in the EU Parliament, lobbyists from large corporations in particular enjoyed privileged access to decision-makers. As a result, they were heard much more than smaller companies or civil society representatives, allowing them to exert correspondingly more influence. [3]

How this influence was exercised in practice is well-documented using the example of OpenAI. [4] They criticized the fact that foundation models (like GPT-3, or other well-known language models) were originally classified as high-risk. In a letter to the EU, they argued: "By itself, GPT-3 is not a high-risk system [...] but [it] possesses capabilities that can potentially be employed in high risk use cases." In other words: Do not regulate us, the creators of the AI models, but rather the other companies that apply them. This desire is directly reflected in the final version of the EU AI Act.

The thought itself isn't inherently wrong — foundation models are all-purpose tools, and risks change depending on the application. This already shows that the EU AI Act is in many ways better than its reputation, and it gets several things right. However, it is critical to note that calls from tech corporations for more regulation are often paired with a "trust us to self-regulate" attitude. [4]

But European companies also echoed the sentiments of US tech giants. In particular, Mistral AI from France and Aleph Alpha from Germany found a sympathetic ear with their respective governments when they demanded weaker regulation of foundation models under the mantra of "European technological sovereignty." Although the goal is reasonable, these interventions ultimately led to a form of regulation that ironically harms these smaller European companies the most.

Shortly after the EU AI Act was passed with these watered-down rules, Mistral AI announced a multi-million dollar partnership with the US giant Microsoft. One doesn't have to accuse the startup of strategic deception here, but the historical irony is glaring: The intervention, carried out under the guise of European technological sovereignty, ultimately made the flagship startup an even more attractive partner for the US monopoly. To what extent this strengthens Europe's independence remains questionable.

Watered-Down Measures Disadvantage Small and Medium-Sized Companies

Two aspects in particular heavily favor the large corporations — and they are not immediately obvious. They consolidate the power of consumer-focused tech giants (with B2C business models) while simultaneously blocking those European companies that urgently want to build trustworthy, industry-specific solutions in the corporate client business (B2B).

For instance, it sounds like a sensible measure that only foundation models posing a systemic risk are subject to extensive documentation requirements. But what constitutes a systemic risk? The primary benchmark is a compute threshold: models trained with more than 10^25 floating-point operations (FLOPs) are presumed risky. [5] This value roughly corresponds to the training effort of GPT-4, which implies training costs in the double-digit millions.

This criterion doesn't just sound abstract and technical — it is also completely detached from how a model is actually used. Naturally, models with fewer FLOPs can also be used for dangerous purposes.
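To make the threshold tangible, here is a minimal back-of-the-envelope sketch. It uses the widely cited rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs; the model sizes below are hypothetical examples, and the approximation is an engineering estimate, not the Act's legal test.

```python
# Back-of-the-envelope check against the EU AI Act's 10^25 FLOPs presumption.
# Assumes the common ~6 * parameters * tokens estimate for dense-transformer
# training compute (a rule of thumb, not the Act's legal definition).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds the Act's threshold."""
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model on 15T tokens: about 6.3e24 FLOPs,
# i.e. below the threshold despite being a very capable model.
print(presumed_systemic_risk(70e9, 15e12))   # False

# Hypothetical 400B-parameter model on the same data: about 3.6e25 FLOPs,
# which would trigger the systemic-risk obligations.
print(presumed_systemic_risk(400e9, 15e12))  # True
```

The asymmetry the article describes is visible in the sketch: the threshold keys on training compute alone, so a smaller model deployed in a dangerous application passes unexamined, while any attempt to train at the frontier crosses the line regardless of use case.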

Beyond this threshold, large tech corporations enjoy decisive advantages: they can absorb compliance costs at a scale no startup can match, and they already operate the kind of centralized, secured training infrastructure that the law implicitly presupposes.

And this is exactly where the trap springs shut on the European economy: It is not about Europe desperately needing its own ChatGPT clone for end consumers. It is about European companies needing tailor-made, highly secure, and trustworthy foundation models for their strong B2B business models. However, the threshold of 10^25 FLOPs effectively bars European innovators from building such a sovereign B2B infrastructure. This cements the monopoly of the US corporations.

Obstacles to Free Competition

Even if an ambitious European startup managed to raise the venture capital to challenge OpenAI directly at the global top, it would immediately hit a regulatory wall.

For open-source initiatives, the situation is even more dire: As soon as a model slips into the "systemic risk" category, the law demands adequate protection of the IT infrastructure during training to prevent misuse or far-reaching consequences of malfunctions. Since open-source projects are decentralized by nature, it is virtually impossible for them to operate such an infrastructure. The threshold of 10^25 FLOPs makes genuine, top-tier open-source models nearly impossible.

Furthermore, many application areas are regulated as high-risk AI, with corresponding burdens for the solution providers. The crucial point here: Through clever lobbying, the massive, expensive compliance obligations and liability risks were successfully pushed down to the application layer — exactly where European value creation takes place.

The tech giants supply the unregulated engine — but the European startups building the B2B car have to pass the bureaucratic safety inspection. This is not only expensive, but extremely dangerous.

European companies are being forced into dependence on US tech corporations, for whom "trust and security" (the fundamental pillars of B2B business) often take a back seat to sheer speed to market. It was precisely this "move fast and break things" mantra that fueled their rise in the first place. Under the guise of "security," the tech giants have hijacked regulation to secure their own dominance. In doing so, however, they are ultimately undermining the development of a secure, trustworthy AI infrastructure for the European economy.

Vague Criteria Lead to Regulation Through the Backdoor

So what does this mean in concrete terms for high-risk AI applications? The EU AI Act demands that AI systems be safe, transparent, and free of bias. How exactly these criteria are evaluated remains open — and therefore malleable. [6]

The final decisions on what these criteria look like will be made later in committees. Committees that are predominantly staffed by representatives of large corporations. Smaller companies and associations simply have neither the time nor the financial resources to play a decisive role in these committees over a period of years. This ultimately leads to regulation through the backdoor — large companies spend years in committee work writing their own regulatory mechanisms. [7] This, once again, leads to cost-intensive regulations that only large corporations can afford.

For startups, however, these easily become insurmountable obstacles. Even the mere threat of heavy fines means that smaller companies have a harder time securing funding from venture capitalists [8] — when the risk of high penalties increases, the investments simply lose their appeal. This phenomenon, where major players rewrite the rules in their favor, is known in economics as "regulatory capture." [9]

How Narratives of Dystopian Risks are Exploited

Knowing about this regulatory capture, we now understand that leading tech CEOs are not necessarily acting out of altruism when they call for regulation. Instead, regulatory authorities are being repurposed to eliminate competition and strengthen the market position of a select few companies.

In this context, we can also better understand why leading tech CEOs regularly warn of the dystopian dangers of AI. Just as the EU AI Act was entering the home stretch, they suddenly began to shrilly warn of the dangers of their own products. This targeted fear-mongering helped shift the regulatory focus away from current, real-world harms (like copyright infringement) toward hypothetical, systemic risks in the future. It was for these future risks that the very high thresholds (the 10^25 FLOPs) were introduced — thresholds that cement the market of today.

These risks stoke fear among politicians, making them more receptive to heavy regulation. The details of these regulations are then drafted in committee work by representatives of the very same companies. This creates barriers to market entry that favor large corporations.

Towards Efficient Regulation

What would be suitable measures to mitigate AI risks without hindering innovation?

What This Means for the Business Models of European Companies

Overall, the EU AI Act gets a lot right in its basic premise. So why does it still have such a poor reputation in the public eye? Because we are measuring its success by the wrong criteria — and in doing so, we are blind to where the real damage is being done.

In everyday life, we experience AI almost exclusively as end consumers (B2C). We chat with ChatGPT or Claude. In this mass market, convenience usually counts for more to us than security. As a result, the companies that rush to market fastest win. When we now complain that Europe plays no role in this B2C race, we falsely conclude: Our regulation is strangling European innovation.

Here, however, we are applying the wrong standards. The strength of the European economy does not lie in building everyday chatbots, but in the B2B environment. Here, industry, SMEs, and public administration rely on absolute security, data sovereignty, and mutual trust. A consistent, values-based EU AI Act could have laid the foundation for a global competitive advantage here — as a worldwide seal of quality for trustworthy AI.

Yet it was exactly these goals that were undermined by lobbying. By watering down the rules for large foundation models from the US and pushing the liability risks onto the application layer, the law now stifles, of all things, our own B2B companies. Even giants like Siemens or SAP are now loudly criticizing the bureaucratic weight. For startups, these hurdles are often fatal.

We are forcing the risks and costs of bureaucracy onto our European companies. At the same time, we are making them dependent on US corporations that have been allowed to keep their technologies largely unregulated. This undermines European technological sovereignty exactly where we actually want to build it up to leverage our strengths.

Europe hasn't lost the AI race yet. But to win in the B2B sector, we must recognize that our problem is not too much regulation, but the wrong regulation. Regulation is therefore the right path — provided it strengthens fair competition instead of digital monocultures.


References: