The EU AI Act was passed in March 2024 and is being implemented in stages through 2030. Its stated goal is to create better conditions for the development and use of AI technologies. However, as we have seen in previous analyses, the noble goals of the law were partly watered down by massive lobbying from US tech giants – and the burden of regulation was cleverly shifted to the application layer, and thus onto the B2B sector.
What does this mean specifically for your company? What obligations arise for you as a user or provider of AI solutions? Let's take a look at the mechanisms of the law, the risk categories, and the (recently significantly adjusted) timeline.
What exactly is an AI system?
Before we talk about risks, we need to clarify the terminology. According to Article 3, an AI system is a machine-based system designed to operate with varying levels of autonomy. It infers from the input it receives how to generate outputs – such as predictions, content, recommendations, or decisions – that can influence our physical or virtual environments.
This official definition is deliberately broad. While this makes it easier for legislators to regulate future, as-yet-unknown technologies, it also means that you need to check very carefully which of your software tools actually fall within the scope of the law.
The law clearly distinguishes between two actors who bear responsibility:
- The provider is the person or institution that develops an AI (or has it developed) and places it on the market.
- The deployer (or operator) is the person or institution using an AI system under their own responsibility in a professional context.
Only purely personal, non-commercial activities are exempt from regulation. Non-compliance is no trivial offense: a lax approach to the law can have drastic consequences. For prohibited systems, fines of up to 35 million euros or 7% of total worldwide annual turnover (whichever is higher) can be imposed; for violations in the high-risk area, up to 15 million euros or 3% apply.
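To make the arithmetic tangible, here is a minimal Python sketch comparing the two ceilings for a given turnover. The turnover figure is invented, and the calculation simplifies the rules (for SMEs, for example, different caps can apply):

```python
# Illustrative only: the fine ceilings named above. For most undertakings,
# the higher of the fixed amount and the turnover percentage applies;
# the turnover figure below is invented.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # EUR cap, share of worldwide turnover
    "high_risk_violation": (15_000_000, 0.03),
}

def max_fine(annual_turnover_eur: float, tier: str) -> float:
    """Return the upper fine limit for a given violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 800 million worldwide annual turnover
print(max_fine(800_000_000, "prohibited_practice"))  # 56000000.0 (7% exceeds the 35 M cap)
print(max_fine(800_000_000, "high_risk_violation"))  # 24000000.0
```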
The Risk Categories: From Prohibitions to Pure Transparency
The AI Act follows a risk-based approach: it classifies AI systems into four categories based on their potential risk and attaches different obligations to each.
1. Prohibited Practices (Unacceptable Risk)
The highest category prohibits AI systems that pose a threat to people or fundamental rights.
- What falls under this? Systems that manipulate people through deceptive or subliminal techniques, exploit the vulnerabilities of specific groups, or engage in "social scoring." Indiscriminate real-time biometric facial recognition in public video surveillance is also prohibited, apart from narrowly defined exceptions.
- Your To-Do: Such systems may not be used in the EU under any circumstances.
2. High-Risk AI Systems
These systems are not prohibited, but they place the highest demands on providers and deployers. They include AI that negatively affects safety or fundamental rights.
- What falls under this? AI in HR management, education, critical infrastructure, or administrative procedures. Systems in products that fall under EU harmonization legislation, such as machinery or medical devices, are also affected. In principle, an application from these areas is considered high-risk unless it merely performs narrow procedural auxiliary tasks.
- Your To-Do: Providers must establish a continuous risk management system spanning the entire lifecycle of the AI. Training data must meet quality criteria to counteract discrimination (bias). Extensive technical documentation, logging of events (see the sketch below), and effective human oversight are required; the humans providing oversight must be able to override or reverse AI decisions. SMEs, however, may provide the documentation in simplified form.
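As a rough illustration of what "logging of events" can mean in practice, here is a minimal Python sketch. The field names, the system name, and the in-memory list are my own simplifications, not requirements from the law:

```python
# A toy illustration of event logging for a high-risk system: every AI
# decision is recorded with a timestamp so it can later be audited.
# In production, you would use durable, tamper-evident storage, not a list.

from datetime import datetime, timezone

event_log: list[dict] = []

def log_event(system: str, input_summary: str, output: str, operator: str) -> None:
    """Append one auditable record per AI decision."""
    event_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input": input_summary,
        "output": output,
        "overseen_by": operator,
    })

log_event("cv-screening-v2", "applicant 4711", "shortlisted", operator="jdoe")
print(event_log[-1])
```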
3. Limited Risk AI Systems
This is the category for most everyday applications. It covers systems that carry a risk of deceiving or manipulating people.
- What falls under this? For example, chatbots, image generators, or recommendation algorithms.
- Your To-Do: Transparency obligations apply here. If a user is interacting with an AI, they must be explicitly informed of this. If systems generate media such as images or texts, it must be clearly evident that they were artificially created.
4. Minimal Risk AI Systems
All systems that do not fall into the above categories. There are no special obligations for them.
General-Purpose AI Models
General-purpose AI models (GPAI) represent a special case. These are foundation models, such as those from OpenAI, that are trained on massive amounts of data. Providers of these models must draw up comprehensive technical documentation so that downstream users can understand the capabilities and limits of the models.
The New Implementation Timeline (As of Spring 2026)
In theory, the requirements for high-risk AI sound overwhelming – and in some places, they are. Politicians have recently responded to criticism and adjusted the implementation timeline. Because technical standards were lacking and the bureaucratic burden for SMEs became too great, the EU agreed on a crucial postponement as part of the "Digital Omnibus" package in March 2026.
This is what the current, adjusted timeline looks like for you:
- August 2024: The AI Act officially entered into force.
- February 2025: The bans on prohibited AI practices took effect. Since then, the obligation for so-called AI literacy has also applied: companies must ensure that their employees possess sufficient AI skills.
- August 2025: Transparency obligations and governance rules took effect, and the requirements for general-purpose AI models (GPAI) became binding.
- December 2027 (The big shift!): Originally, the strict obligations for high-risk AI systems (Annex III, like HR or education) were supposed to start in August 2026. This deadline has now been postponed by 16 months to December 2027. This gives European B2B companies desperately needed time for certifications and internal audit processes.
- August 2028: Application of high-risk rules for systems that fall under existing product regulations (Annex I, e.g., mechanical engineering).
Q&A: The Most Pressing Practical Questions
Does the EU AI Act also apply to small and medium-sized enterprises (SMEs)?
Yes. The law does not differentiate by company size in its basic applicability. The good news, however, is that simplifications are provided for startups and SMEs – for example, technical documentation for high-risk systems can be provided in a simplified form.
When do I qualify as an SME and benefit from simplified documentation obligations?
Since the EU AI Act is a European regulation, the EU Commission's official SME definition applies here. Your company counts as an SME (small or medium-sized enterprise) if it employs fewer than 250 people and, in addition, either its annual turnover does not exceed 50 million euros or its annual balance sheet total does not exceed 43 million euros. For small enterprises (fewer than 50 employees) and for micro-enterprises and startups (fewer than 10 employees), the simplifications go even further.
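To condense these thresholds, here is a small Python self-check. It encodes only the headcount and financial figures named above; the official definition involves further rules (for example on partner and linked enterprises), so treat the result as a first indication only:

```python
# Rough SME self-check based on the thresholds described above.
# Note: the official EU definition also attaches financial thresholds to the
# small and micro categories and considers ownership links between companies.

def sme_category(employees: int, turnover_eur_m: float, balance_sheet_eur_m: float) -> str:
    """Classify a company against the EU headcount and financial thresholds."""
    meets_financials = turnover_eur_m <= 50 or balance_sheet_eur_m <= 43
    if not (employees < 250 and meets_financials):
        return "not an SME"
    if employees < 10:
        return "micro-enterprise"
    if employees < 50:
        return "small enterprise"
    return "medium-sized enterprise"

print(sme_category(employees=120, turnover_eur_m=30, balance_sheet_eur_m=20))
# -> medium-sized enterprise (simplified documentation obligations may apply)
```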
Do I need to act right now if high-risk obligations don't apply until late 2027?
Definitely. The obligation to ensure AI literacy and the bans on unacceptable-risk practices are already in force. Since August 2025, strict transparency obligations have also applied (e.g., labeling of chatbots and AI-generated content). The first and most important step for you now is an internal AI inventory: which tools do you use, and which risk category do they fall into?
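As a starting point for this inventory, one structured record per tool is usually enough. The following Python sketch is only a suggestion; the field names and the example tools ("Acme AI", "HRSoft") are invented:

```python
# A possible structure for an internal AI inventory: one record per tool,
# with your role and a preliminary risk classification per the AI Act.

from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    tool_name: str       # e.g., the support chatbot on your website
    vendor: str          # who provides the system
    our_role: str        # "provider" or "deployer"
    use_case: str        # what the tool is used for
    risk_category: str   # "prohibited" | "high" | "limited" | "minimal"

inventory = [
    AIInventoryEntry("Support chatbot", "Acme AI", "deployer",
                     "customer service", "limited"),
    AIInventoryEntry("CV screening tool", "HRSoft", "deployer",
                     "pre-selection of applicants", "high"),
]

# High-risk entries deserve immediate attention:
for entry in inventory:
    if entry.risk_category == "high":
        print(f"Review now: {entry.tool_name} ({entry.use_case})")
```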
Does the law prevent European innovation?
Critics argue that the regulations hinder innovation and are difficult for small companies to fulfill. Proponents counter that the negative impact is limited and that safe innovation is enabled. It is true that the AI Act sets standards for trustworthy AI; especially in European B2B business, demonstrably compliant implementation of these standards can be a competitive advantage. Unfortunately, many of the operational requirements are designed in a way that makes them genuinely difficult to implement, particularly for small companies.
What other regulations are important for my AI application?
The AI Act does not exist in a vacuum. If your product already falls under existing EU harmonization legislation (covering, for example, toys, machinery, or medical devices), it is automatically classified as high-risk AI, and implementing the obligations of the EU AI Act is then closely intertwined with the certification mechanisms of that harmonization legislation. Beyond that, the GDPR with its seven processing principles (such as data minimization and purpose limitation) naturally continues to apply to any processing of personal data, as do the Digital Services Act (DSA), which demands algorithmic transparency from online platforms, and the Digital Markets Act (DMA), which reins in the market power of the giant "gatekeepers".
What are the most important steps if I bring a high-risk application to the market?
If you develop such a solution and act as a provider, the catalog of requirements is enormous. Among other things, you must apply strict data governance principles during AI training (the data must be as error-free and representative as possible). In addition, the AI must exhibit an appropriate level of accuracy, robustness, and cybersecurity so that it is resilient against errors and malicious attacks. Equally important is full transparency towards your customers (the deployers) so that they can use the system safely. Please keep in mind: this is only a brief excerpt of the obligations.
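To make one facet of data governance concrete, here is a toy Python check that flags underrepresented groups in training data. The attribute name and the 10% threshold are invented for illustration; real bias audits are considerably more involved:

```python
# Toy data-governance check: flag groups whose share of the training data
# falls below a chosen threshold. Threshold and attribute are illustrative.

from collections import Counter

def underrepresented_groups(records: list[dict], attribute: str,
                            min_share: float = 0.10) -> list[str]:
    """Return values of `attribute` whose share of the data is below `min_share`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

training_data = [{"gender": "f"}] * 40 + [{"gender": "m"}] * 55 + [{"gender": "d"}] * 5
print(underrepresented_groups(training_data, "gender"))  # ['d'] -> only a 5% share
```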
How do I need to adapt simple AI applications like chatbots?
Everyday applications like chatbots, image generators, or recommendation algorithms fall under "AI systems with limited risk." The risk here consists primarily of the potential deception or manipulation of people, which is why transparency obligations apply first and foremost. The regulation is refreshingly pragmatic: if users are chatting with an AI, they must be explicitly informed that they are interacting with an AI and not a human (unless this is already obvious from the circumstances). In practice, a clear, unambiguous notice at the beginning of the conversation is sufficient.
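A minimal Python sketch of this disclosure pattern might look as follows; `generate_reply` is a placeholder for whatever chatbot backend you actually use:

```python
# Sketch of the transparency obligation for chatbots: the very first message
# shown to the user discloses that they are talking to an AI.

AI_DISCLOSURE = "Note: You are chatting with an AI assistant, not a human."

def generate_reply(message: str) -> str:
    # Placeholder: call your actual chatbot backend here.
    return f"(AI reply to: {message})"

def chat_session(user_messages: list[str]) -> list[str]:
    """Run a chat, ensuring the AI disclosure precedes any generated content."""
    transcript = [AI_DISCLOSURE]
    for message in user_messages:
        transcript.append(generate_reply(message))
    return transcript

print("\n".join(chat_session(["What are your opening hours?"])))
```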
What do I need to look out for if I only use a service provider's AI application?
The legal distinction is essential here: the provider is the one who develops the AI and brings it to market. If you merely purchase this AI and use it for your business purposes, you are the deployer. But even as a deployer you have obligations: you are responsible for ensuring effective human oversight. Your employees must understand the capabilities and limitations of the systems, and in an emergency they must be able to override or reverse AI decisions to prevent harm. If you need training to help your employees better understand generative AI and use it more efficiently, find out more about my training offers here.
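One possible human-oversight pattern, sketched in Python: only low-impact decisions pass through automatically, while everything else is routed to a human who can confirm or override the AI's suggestion. The impact labels and routing logic are illustrative assumptions, not prescriptions from the law:

```python
# Sketch of a human-in-the-loop gate: high-impact AI decisions require a
# human reviewer, who may accept, override, or reverse the suggestion.

from typing import Callable

def apply_ai_decision(ai_decision: str, impact: str,
                      human_review: Callable[[str], str]) -> str:
    """Pass low-impact decisions through; route the rest to a human."""
    if impact == "low":
        return ai_decision
    return human_review(ai_decision)

def reviewer(ai_decision: str) -> str:
    # Placeholder for a real review workflow or UI.
    answer = input(f"AI suggests '{ai_decision}'. Press Enter to accept or type an override: ")
    return answer or ai_decision

final = apply_ai_decision("reject application", impact="high", human_review=reviewer)
print("Final decision:", final)
```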
Who offers me help from the government side?
The implementation of the EU AI Act is organized nationally.
- In Germany, the Federal Network Agency (BNetzA) acts as the central point of contact and market surveillance authority.
- In Austria, the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR) with its "AI Service Center" takes on this role as the central point of contact for companies.
- In Switzerland, the EU AI Act does not apply directly, as Switzerland is not an EU member. Attention: the AI Act does, however, apply extraterritorially. This means that Swiss companies must also comply with it as soon as they offer their AI products or services on the EU market! For purely domestic AI matters, the Federal Office of Communications (OFCOM) and the Federal Office of Justice (FOJ) are currently responsible in Switzerland.
This article provides a practical overview of the legal framework of the EU AI Act and does not claim to be exhaustive. Please note that this strategic assessment cannot replace binding legal advice from specialized lawyers.