Updated Wednesday, September 11, 2024


The AI Act is here - The European approach to regulating artificial intelligence

On August 1, 2024, the "Artificial Intelligence Act" ("AI Act") came into force after a long legislative struggle. This article will take a closer look at the requirements that deployers and providers of AI systems must now comply with.

Boris Arendt

Salary Partner (Attorney-at-law)

Leon Neumann

Scientific Research Assistant

Risk-based approach
Definition of AI systems
Providers vs. deployers
Prohibited AI practices
High-risk AI systems
AI systems with low risk
AI literacy as a core requirement (Art. 4 AI Act)
Special case: General Purpose AI
Who monitors compliance with the regulation?
Outlook

The EU Commission had already developed a strategy for regulating and simultaneously promoting the development of artificial intelligence in 2018. The Commission then presented an initial proposal for the “Artificial Intelligence Act” (“AI Act”) in April 2021. The AI Act entered into force on August 1, 2024.

This concluded the legislative process three years after the first draft was presented. The final version contains numerous changes compared to the initial draft, some of which are intended to take account of new technical developments such as OpenAI's chatbot ChatGPT.

The overall aim of the regulation is to create a legal framework that ensures sufficient protection of fundamental rights and creates a basis of trust when dealing with artificial intelligence. At the same time, the EU is to remain a place that promotes innovation, where the administrative and financial burden of working with artificial intelligence is kept low, especially for small and medium-sized enterprises (SMEs). With this in mind, the AI Act is intended to place clear requirements and obligations on providers and deployers with regard to specific applications of AI.

Companies that use or offer AI are therefore well advised to engage with the regulation now at the latest and to start implementing its requirements. As was foreseeable early on, however, the regulation does not affect all commercial players dealing with AI to the same extent: the majority of AI systems are not subject to the strict requirements that the regulation places on high-risk AI systems. Most players therefore only have to fulfill transparency obligations or are not subject to any specific requirements at all.

This article is intended to provide you with a summary of whether and which requirements under the AI Act must be observed if you work with artificial intelligence in your business.


Risk-based approach

Which requirements apply depends first of all on how the AI system concerned is classified by the regulation. The AI Act follows a risk-based approach, under which AI systems are divided into four risk groups based on their area of application and purpose: systems with unacceptable, high, limited and minimal risk. The greater the risk posed by the AI system, the more extensive the obligations to be complied with. In this way, the regulator attempts to limit regulation to what is necessary and to maintain an innovation-friendly legal framework.


Definition of AI systems

The AI Act only applies to the use of "AI systems"; traditional software is therefore not covered (see Recital 12).

AI systems are machine-based systems designed to operate with varying degrees of autonomy. They may adapt after deployment and infer from the inputs they receive how to generate certain outputs (cf. Art. 3 No. 1 AI Act).

AI systems are therefore distinguished from simple data processing in particular by their capacity to infer. This capacity enables learning, reasoning and modelling processes such as those based on machine learning.
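
To make this distinction more tangible, here is a deliberately simplified sketch in Python. It is purely illustrative and not a legal test; the credit-check scenario and all values are invented. Traditional software applies a rule fixed by its developer, whereas even a very crude learning system derives its decision rule from the data it receives.

```python
# Purely illustrative, not a legal test: classic software applies a rule
# fixed by its developer, while a (very crude) learning system derives its
# decision rule from the example data it receives.

# Traditional software: the threshold is hard-coded by a human.
def rule_based_check(income: float) -> bool:
    return income >= 40_000  # fixed rule, no inference


# Minimal "learning" system: the threshold is inferred from example data.
def fit_threshold(positive_examples: list[float], negative_examples: list[float]) -> float:
    mean_positive = sum(positive_examples) / len(positive_examples)
    mean_negative = sum(negative_examples) / len(negative_examples)
    return (mean_positive + mean_negative) / 2  # decision rule derived from inputs


threshold = fit_threshold([44_000, 50_000, 41_000], [18_000, 22_000, 26_000])
print(rule_based_check(35_000))  # False - outcome follows the fixed rule
print(35_000 >= threshold)       # True - outcome follows the rule inferred from data
```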


Providers vs. deployers

The regulation primarily addresses providers of AI systems who place a system on the market or put it into service in the EU, as well as deployers, with the exception of purely private use. Accordingly, the specific requirements to be implemented vary not only with the type and risk of the system, but also with your role in relation to the system.

Providers: The majority of the legal requirements are placed on the providers of AI systems. According to the legal definition in Art. 3 No. 3 AI Act, a "provider" is a natural or legal person who develops an AI system or has one developed with a view to placing it on the market or putting it into service under its own name or trademark. It is irrelevant whether the system is offered for a fee or free of charge.

Deployers: Deployers, on the other hand, are defined in Art. 3 No. 4 AI Act as natural or legal persons who use an AI system under their own authority. Purely personal, non-professional use is excluded, meaning that private individuals are not subject to the regulation in that capacity.

In both cases, the regulation expressly clarifies that authorities and other public bodies are also covered by the terms.


Prohibited AI practices

The AI Act initially prohibits AI practices that it deems to pose an unacceptable risk. The ban applies not only to the providers, but also to the users of such practices. The regulation specifically lists which practices are prohibited, focusing on those that are particularly invasive of fundamental rights. These include, for example, practices for subconsciously influencing the behavior of natural persons or social scoring systems.

However, prohibited AI practices should rarely occur in everyday business. High-risk systems are far more relevant and the central subject of the regulation.


High-risk AI systems

Most of the requirements of the AI Act relate to high-risk systems. In order to check which specific requirements you have to meet, you must first determine whether your system is classified as high-risk by the regulation.

1. Classification as a high-risk system

Classification can certainly cause difficulties in individual cases, as the legislator has designed a complex classification system instead of a clear definition.

High-risk systems include, on the one hand, AI systems that are used as a safety component of a product covered by the EU legal acts listed in Annex I or are themselves such a product. The legal acts listed in Annex I concern, for example, machinery, toys or medical devices.

On the other hand, AI systems are considered high-risk if their scope of application is listed in Annex III of the regulation. In summary, these include:

  • AI systems in critical infrastructures, such as transportation and energy supply, which pose security risks;
  • AI systems in education, vocational training, employment and human resource management that have a significant impact on people's lives;
  • and AI systems in essential public services and in the criminal justice system, where they could affect fundamental rights.

It should be noted that the final version of the regulation has been given a new exemption following adjustments in the last legislative phase. According to this, despite being classified in Annex III, a system is not considered high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons and does not materially influence decision-making processes. Ultimately, this excludes AI systems that only carry out subordinate auxiliary activities.

Whether the exemption applies in a specific case may be assessed by the provider of the system itself; however, the reasons for this assessment must be documented.

To guard against an incorrect assessment by the provider, deployers should always check for themselves whether the system falls under Annex III and, in case of doubt, ask the provider for the documented justification if the provider considers the system exempt.

In conclusion, the use of AI systems in the following areas is considered high-risk:

  • Critical infrastructures (e.g. transportation) that could endanger the lives and health of citizens
  • Education or training that can determine a person's access to education and career progression (e.g. exam scores)
  • Safety components of products (e.g. AI application in robot-assisted surgery)
  • Employment, management of employees and access to self-employment (e.g. CV-sorting software for recruitment processes)
  • Essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain credit)
  • Law enforcement that can interfere with people's fundamental rights (e.g. assessing the reliability of evidence)
  • Migration, asylum and border control management (e.g. automated checking of visa applications)
  • Administration of justice and democratic processes (e.g. AI solutions for searching for court judgments)

If the AI system in question is considered high-risk according to the above, a large number of requirements must be met, whereby a distinction must be made as to whether you are a provider or a deployer of the system.

2. Obligations for providers of high-risk AI systems (Art. 16 AI Act)

If it is determined that the AI system is high-risk, it must first undergo the conformity assessment procedure and comply with the requirements listed below. In a second step, it must be registered in the EU database. In the final step, the EU declaration of conformity must be issued and the conformity marking affixed. If substantial changes are subsequently made to the system, the process must be repeated.

The duties are as follows:

  • Establishment of a risk management system: The system must continuously identify, analyze and evaluate the risks associated with AI. The identified risks must then be countered with suitable measures.
  • Data governance: It must be ensured that the AI system uses high-quality training data whose data sets are relevant and unbiased.
  • Technical documentation: The documentation must show how the system fulfills the requirements of the regulation. Annex IV of the AI Act lists the minimum information that must be included.
  • Logging ("logs"): The provider must ensure that the system automatically logs events and processes, or at least makes such logging possible for the deployer, so that the key processes within the system remain traceable. These so-called "logs" must be retained by the provider (a simplified illustration of such event logging follows after this list).
  • Transparency: The system must be designed with sufficient transparency and provided with (digital) instructions for use so that users can operate the system correctly and comply with the provisions of the regulation themselves.
  • Human supervision: This serves to prevent or minimize risks. The provider of the system can either integrate human supervision into the AI system before it is placed on the market or make it technically possible for the user of the system to take over supervision themselves.
  • Accuracy, robustness and cyber security: The system must have an appropriate level of accuracy, robustness and cyber security throughout its life cycle.
  • Establishment of a quality management system: The quality management system must ensure compliance with the regulation, be documented in writing and cover, among other things, procedures for risk management, post-market surveillance and incident reporting.
  • Corrective measures: After the system has been placed on the market or put into service, corrective action must be taken where there is reason to believe that the system does not comply with the requirements of the regulation.
  • Provision of information: Upon request and in the event of audits, the competent authorities must be provided with all information necessary to demonstrate compliance with the requirements of the regulation.
  • Appointment of an EU representative: Providers not established in the EU must appoint a representative established in the EU in writing in accordance with Art. 22 AI Act before introducing the system to the EU market.
  • Conformity assessment procedure: In accordance with Art. 43 AI Act, the system must undergo the relevant conformity assessment procedure before it is placed on the market or put into operation. The provider must also issue an EU declaration of conformity for the AI system in accordance with Art. 47 AI Act and keep it for 10 years after commissioning or placing on the market.
  • CE conformity marking: The CE conformity marking in accordance with Art. 30 of Regulation (EC) No 765/2008 must be affixed to the AI system (digitally) to indicate conformity with the AI Act.
  • Registration in the EU database: Before the system is placed on the market or put into operation, it must be registered in the EU database referred to in Art. 71 AI Act.
  • Post-market monitoring: The provider must actively and systematically collect, document and analyze relevant data provided by deployers or gathered from other sources on the performance of the AI system over its entire service life.
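
As a purely illustrative sketch of the logging obligation mentioned above: the AI Act does not prescribe any particular log format, so the structure, field names and JSON-lines storage below are assumptions chosen for readability, not a compliance template.

```python
# Illustrative sketch only: the AI Act does not prescribe a log format.
# Field names and the JSON-lines storage used here are assumptions.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path


class EventLogger:
    """Appends timestamped system events to an append-only JSON-lines file."""

    def __init__(self, log_file: str = "ai_system_logs.jsonl"):
        self.path = Path(log_file)

    def log(self, event_type: str, **details) -> dict:
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "input_received", "inference", "human_review"
            "details": details,
        }
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
        return record


# Example: record the key steps of one automated decision so that it can be
# reconstructed later, e.g. for the deployer or a market surveillance authority.
logger = EventLogger()
logger.log("input_received", source="application_form", reference="case-0042")
logger.log("inference", model_version="1.3.0", score=0.87)
logger.log("human_review", reviewer_role="credit_officer", decision="approved")
```
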
3. Obligations for deployers of high-risk AI systems (Art. 26 AI Act)
  • Technical and organizational measures: Deployers must first take TOMs to ensure that the AI system is used in accordance with the instructions for use and the other requirements of Art. 26 para. 2-5 AI Act.
  • Human supervision: In the event that the provider has transferred the role of human supervision to the user, this task must be performed by a competent person who is sufficiently qualified and supported for this purpose.
  • Quality of the input data: Insofar as the input data is under the deployer’s control, he or she shall ensure that it corresponds to the intended purpose of the AI system.
  • Duty of care: The deployer must use the system in accordance with the instructions for use and provide the provider with the information required for post-market monitoring. If there is reason to assume that use of the system will lead to a disproportionate risk to health, safety or fundamental rights, its use must be suspended.
  • Reporting obligations: In the event of decommissioning or other serious incidents, there are various reporting obligations towards the provider or manufacturer of the system.
  • Retention of the logs: The logs automatically generated by the system must be kept by the deployer for at least 6 months in order to be able to prove the proper functioning of the system or to enable subsequent checks.
  • Information for employees: Employers who use a high-risk AI system in the workplace must inform the affected employees in advance that they will be subject to such a system.
  • Duty to provide information: Special information obligations apply if the system makes decisions about natural persons or is used to support such decisions.
  • Fundamental Rights Impact Assessment: In contrast to the original proposal by the EU Parliament, a Fundamental Rights Impact Assessment now only has to be carried out by public-sector deployers and private deployers performing public tasks. In addition, it is required where AI is used for creditworthiness checks or for risk assessment and pricing in life and health insurance.

It should not be overlooked that the provider obligations under Art. 16 AI Act may also apply to deployers in certain cases. This is particularly the case if the deployer

  • places a high-risk AI system on the market or puts it into operation under its own name or brand,
  • makes a significant change to an AI system classified as high-risk without it losing its status as a high-risk AI system, or
  • makes a significant change to the intended purpose of another AI system, thereby turning it into a high-risk AI system.

In these cases, the deployer is deemed to be the new provider of the AI system in accordance with Art. 25 para. 1, 2 AI Act, while the old provider is relieved of responsibility. However, the old provider must support the new provider in fulfilling its obligations.


AI systems with low risk

The AI Act does not explicitly name the third category of AI systems. Rather, the regulation implies this category by imposing certain transparency requirements on AI systems that carry particular manipulation risks - regardless of whether the system is classified as high-risk. Because these transparency obligations can also apply to AI systems that are not high-risk, one can speak of a "third category" of AI systems that pose only a "low risk" (namely a risk of manipulation).

From the regulator's perspective, such risks exist when artificial intelligence generates content or comes into direct contact with natural persons. This is the case with chatbots such as ChatGPT. Special requirements therefore apply to the providers and operators of such systems.

However, it should not be overlooked that these requirements may also apply to high-risk AI systems and then apply in addition to the obligations described above. AI systems that are not prohibited, not high-risk and not subject to the transparency obligations (or the provisions on general purpose AI, see below) are not subject to the regulation and therefore form the fourth category ("minimal risk").

1. Obligations for providers of certain AI systems (Art. 50 para. 1, 2 AI Act)
  • AI systems that are intended to interact directly with natural persons must be designed or conceived by the provider in such a way that the natural person is informed that they are interacting with an AI system. By way of exception, this does not apply if this fact is readily apparent to a reasonable person.
  • AI systems that generate audio, image, video or text material must mark their outputs in a machine-readable format as artificially generated or manipulated. The solution chosen by the provider to implement this requirement must be effective, interoperable, robust, reliable and state of the art (a simplified illustration follows below).

In both cases, the information must be made available to the natural persons concerned in a clear and recognizable manner at the latest at the time of the first interaction or exposure to the AI system.
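
By way of illustration only: the AI Act does not prescribe a specific marking technique (providers may rely on watermarking or established metadata standards, for example), so the provenance "sidecar" manifest, field names and helper function below are assumptions made for this sketch rather than a prescribed solution.

```python
# Illustrative sketch, not a prescribed format: write a machine-readable
# sidecar manifest declaring a generated file as artificially generated.
# The sidecar approach and field names are assumptions for this example.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def mark_as_ai_generated(content_path: str, generator: str) -> Path:
    """Writes a JSON manifest next to the file declaring it AI-generated."""
    content = Path(content_path).read_bytes()
    manifest = {
        "artificially_generated": True,
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # The hash ties the declaration to the exact content it refers to.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    sidecar = Path(str(content_path) + ".ai.json")
    sidecar.write_text(json.dumps(manifest, indent=2), encoding="utf-8")
    return sidecar


# Example: mark a generated text file before it is delivered to the user.
Path("generated_summary.txt").write_text("Summary produced by an AI model.", encoding="utf-8")
print(mark_as_ai_generated("generated_summary.txt", generator="example-text-model"))
```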

2. Obligations for deployers of certain AI systems (Art. 50 para. 3, 4 AI Act)
  • Data subjects must be informed by the deployer of an emotion recognition system or a system for biometric categorization about the operation of that system.
  • Deployers of AI systems that generate or manipulate images, videos or audio content must disclose that the content has been artificially generated or manipulated. There are, however, exceptions in the area of artistic freedom and satire.
  • Finally, deployers of AI systems that generate or manipulate text published for the purpose of informing the public must, in principle, also disclose this.

As in the case of providers, this information must be made available to the natural persons concerned in a clear and recognizable manner at the latest at the time of the first interaction or exposure.


AI literacy as a core requirement (Art. 4 AI Act)

Irrespective of how an AI system is classified, the regulation also requires so-called "AI literacy". Providers and deployers must ensure that their own staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. The existing experience and knowledge of the persons concerned and the context in which the AI system is to be used are to be taken into account. The scope of the required competence depends on the risk potential of the AI system and the associated obligations.


Special case: General Purpose AI

The subject of subsequent amendments during the legislative process was so-called "general purpose AI" (GPAI), formerly referred to as "foundation models". These are models of AI systems that have been trained on a broad database, are designed for general use and can be adapted for a wide range of different tasks. Classic examples include GPT-4o from OpenAI and other large language models.

It is worth noting that the legislator imposes additional requirements only on the providers of GPAI. All other requirements - in particular those for high-risk AI systems - continue to apply regardless of whether the AI is also classified as general purpose AI. Deployers of GPAI are therefore not subject to any additional obligations, but they may well have to comply with the requirements outlined above.

The obligations for GPAI providers are listed in Art. 53 AI Act and are briefly described here:

  • Technical documentation: This should be created and continuously updated and must include the training and testing procedures of the AI system as well as its evaluation results.
  • Provide information and documentation on how the AI model works: This should enable downstream providers who intend to integrate the GPAI model into their own AI system to understand the capabilities and limitations of the system and meet the requirements placed on them.
  • Creation of a policy for compliance with the copyright directive.
  • Publication of a sufficiently detailed summary of the content used for the training of the GPAI model.

In Art. 3 no. 63-66 AI Act, the legislator also defines GPAI models with systemic risk and imposes the following additional requirements on them in Art. 55 AI Act:

  • Carrying out model evaluations: This also involves conducting and documenting adversarial tests to identify and mitigate systemic risks.
  • Assessment and mitigation of potential systemic risks and their sources.
  • Documentation and reporting of serious incidents and related corrective actions.
  • Ensuring an appropriate level of cybersecurity.

Who monitors compliance with the regulation?

Supervision of compliance with the regulation is divided into sectors. Part of the responsibility lies with the EU Commission, which set up a corresponding body in February 2024, the European AI Office. This office is primarily responsible for supervising general-purpose AI models.

However, the enforcement tasks are essentially the responsibility of the member state supervisory authorities. These are responsible in particular for the core task of market surveillance and the accreditation of conformity assessment bodies, which in turn check whether AI systems fulfil the requirements of the regulation. By August 2, 2025, the Member States must each designate at least one notifying authority and one market surveillance authority as the competent national authority and set up a ‘single point of contact’ to act as a central point of contact for complaints under the AI Act. This does not necessarily require the establishment of new authorities, but existing authorities can also be assigned the corresponding supervisory function.

For individual sectors, the national allocation of competences is already specified by the AI Act. For example, for high-risk AI systems that are used in products that are already subject to special EU product regulations, the relevant national market surveillance authorities should also be responsible for compliance with the AI Act. In the areas of law enforcement, migration, asylum or border control, as well as in the administration of justice, the data protection supervisory authorities are designated as market surveillance authorities.

However, the majority of responsibilities are left to the member states to determine, such as those for AI systems for remote biometric identification or for high-risk AI systems in the areas of critical infrastructure, education and labour management.

In Germany, the distribution of sectoral responsibilities not provided for in the AI Act is still unclear. In a resolution dated May 3, 2024, the Data Protection Conference (DSK) proposed that the data protection authorities should not only assume the responsibilities provided for by the AI Act, but should also act as a general market surveillance authority for the AI Act. In a statement dated July 16, 2024, the European Data Protection Board (EDPB) also spoke out in favour of the data protection authorities being responsible.

In contrast, a study by the Bertelsmann Foundation from May 2024 recommends the Federal Network Agency (BNetzA) as the market surveillance authority, as it could easily be developed into a more comprehensive digital authority. According to recent research by the Tagesspiegel, the ministries in charge are probably also leaning towards this solution.

The federal government has not yet publicly commented on how the competences should be distributed, but has merely announced that it will submit a draft implementing law to the Bundestag in which the responsibilities will be regulated. A draft bill is planned by mid-October 2024.


Outlook

As stated at the beginning, the AI Act entered into force on August 1, 2024. However, it will only become fully applicable after two years. Because the implementation effort varies widely across the individual obligations, the provisions become applicable in stages pursuant to Art. 113 AI Act. For example, the prohibitions on certain AI practices apply after just six months, while the governance rules and the obligations for GPAI must be complied with after twelve months. The provisions for AI systems embedded in regulated products only become applicable after 36 months.

Here is an overview of the relevant dates:

  • February 2, 2025 (6 months after entry into force): Prohibited AI practices must be discontinued and the general provisions, including the AI literacy requirement, must be implemented
  • August 2, 2025 (12 months after entry into force): Obligations for GPAI systems must be complied with (with the exception of Art. 101 AI Act); the sanction mechanism of the AI Act applies and is implemented
  • August 2, 2026 (24 months after entry into force): All remaining provisions of the AI Act become applicable, in particular those for high-risk AI systems according to Annex III (with the exception of those according to Annex I)
  • August 2, 2027 (36 months after entry into force): Obligations for high-risk AI systems in accordance with Annex I must be complied with
  • August 2, 2030 (72 months after entry into force): Applicable to high-risk AI systems intended for use by public authorities
  • December 31, 2030: Applicable to AI systems that are components of the large-scale IT systems established by the legal acts listed in an annex to the AI Act and placed on the market or put into service before August 2, 2027

The Commission's guidelines and implementing acts, which are to be adopted in the next six months, will be of particular importance in implementing the legal requirements. As a result, the Commission will have a significant influence on how the regulation is understood and applied.

For example, the definition of the term "artificial intelligence" remains unclear. Some read the legal definition narrowly, so that only systems that continue to adapt during operation and outside the direct control of the provider are covered; others read it so broadly that the majority of IT systems handling more complex tasks would fall within it. This is likely to remain open until the Commission publishes its guidelines on the definition.

