Updated Thursday, April 11, 2024

The AI Act is coming - The European approach to regulating artificial intelligence

A summary of the legal requirements for deployers and providers of AI

Boris Arendt

Salary Partner (Attorney-at-law)

Leon Neumann

Scientific Research Assistant

Introduction
Risk-based approach
Provider vs. deployer
Prohibited AI practices
High-risk AI systems
Low-risk AI systems
AI literacy as a core requirement for all deployers of AI
Special case: General-purpose AI
Outlook

After a long legislative struggle, the final version of the AI Act was adopted by the European Parliament last month. This article will take a closer look at what deployers and providers of AI systems can now expect.


Introduction

The topic of artificial intelligence has been on the European legislator's agenda for some time. As early as 2018, the EU Commission developed a strategy for regulating and, at the same time, promoting the development of artificial intelligence. A first draft of the law on artificial intelligence ("AI Act") was then presented by the Commission in April 2021. The aim was to ensure sufficient protection of fundamental rights by creating a legal framework and to establish a basis of trust when dealing with artificial intelligence. Following a number of substantive amendments, the Member States unanimously approved the AI Act in the Committee of Permanent Representatives (Coreper) on 2 February 2024. The final draft was then adopted by the European Parliament on 13 March 2024 with 523 out of 618 votes (and 49 abstentions).

Although the Council of Ministers still has to formally vote on the draft in April, it has already agreed to approve the final version. The final step is then the publication of the AI Act in the Official Journal of the EU; the regulation enters into force 20 days after publication, which is expected towards the end of May.

Companies that use or offer AI are therefore well advised to familiarise themselves with the regulation now and to start implementing its requirements. However, as was foreseeable at an early stage, the AI Act does not affect all commercial players who deal with AI equally. The majority of AI systems are not subject to the strict requirements that the regulation places on high-risk AI systems. Most players are therefore likely to either only have to fulfil transparency obligations or not be subject to any requirements at all.

This article is intended to provide you with a summary of whether and which requirements must be observed under the AI Act if you work with artificial intelligence for business purposes.


Risk-based approach

Which requirements apply depends first of all on how the AI system in question is categorised by the regulation. The AI Act follows a risk-based approach, according to which AI systems are assigned to four different risk groups based on their area of application and purpose: systems with unacceptable, high, limited and minimal risk. The regulation addresses, on the one hand, providers of AI systems that place a system on the market or put it into service in the EU and, on the other hand, those who use such systems ("deployers"), with the exception of purely private use. The specific requirements to be implemented therefore vary not only according to the type or risk of the system, but also according to your role in relation to it.


Provider vs. deployer

Providers: The majority of the legal requirements are addressed to the providers of AI systems. According to the legal definition in Art. 3 para. 1 no. 2 AI Act, a "provider" is a natural or legal person who develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or brand. It is irrelevant whether the system is offered for a fee or free of charge.

Deployer: According to Art. 3 para. 1 no. 4 AI Act, deployers are natural or legal persons who use an AI system under their own responsibility. Purely private use is excluded, meaning that private individuals using AI outside a professional context are not subject to the regulation.

In both cases, the regulation expressly clarifies that authorities and other public bodies are also covered by the terms.


Prohibited AI practices

The AI Act initially prohibits AI practices that it considers to pose an unacceptable risk. The ban applies not only to the providers, but also to the deployers of such practices. The regulation specifically lists which practices are prohibited, focussing on those that are particularly invasive of fundamental rights. These include, for example, practices for subconsciously influencing the behaviour of natural persons or social scoring systems.

However, prohibited AI practices are likely to be rare in everyday business life. High-risk systems are far more relevant and the central subject of the regulation.


High-risk AI systems

Most of the requirements of the AI Act relate to high-risk systems. In order to check which specific requirements you have to fulfil, it is therefore first necessary to determine whether your system is classified as high-risk by the regulation.

Classification as a high-risk system

The categorisation can cause difficulties in individual cases, as the legislator has developed a complex classification system instead of a clear definition.

High-risk systems include AI systems that are used as a safety component of a product that is the subject of the EU legal acts listed in Annex II or are themselves such a product. The legal acts listed in Annex II concern, for example, machinery, toys or medical devices.

On the other hand, AI systems are considered high-risk if their scope of application is listed in Annex III of the regulation. These include, in summary:

  • (1.) AI applications in critical infrastructures, such as transport and energy supply, which harbour safety risks;

  • (2.) AI systems in education, vocational training, employment and human resource management that have a significant impact on people's lives; and

  • (3.) AI applications in essential public services and criminal justice where they could affect fundamental rights.

However, it should be noted that the final version of the regulation now provides for a new exemption. According to this, despite being classified in Annex III, a system is not considered high-risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons and does not materially influence decision-making processes. Ultimately, this excludes AI systems that only carry out subordinate auxiliary activities.

Whether the exemption applies in a specific case may be assessed by the provider of the system itself; however, the reasons for this assessment must be documented.

So that deployers do not bear the risk of an incorrect assessment by the provider, they should always check for themselves whether the system falls under Annex III and, in case of doubt, ask for the provider's documented justification if the provider considers the system to be exempt from the obligations.

If the AI system in question is considered high-risk according to the above, a large number of requirements must be met; which ones apply depends on whether you act as a provider or as a deployer of the system.

Obligations for providers of high-risk AI systems (Art. 16 AI Act)
  • Establishment of a risk management system: Within this system, the risks associated with the AI must be continuously identified, analysed and evaluated. The identified risks must then be countered with suitable measures.

  • Data governance: It must be ensured that the AI system uses high-quality training data whose data sets are relevant and "unbiased".

  • Technical Documentation: The documentation must show how the system fulfils the requirements of the regulation. Annex IV of the AI Act lists the minimum information that must be included.

  • Logging ("logs"): The provider must ensure that the system automatically logs events and processes, or at least makes this possible for the deployer, so that the key processes within the system remain traceable. These so-called "logs" must be retained by the provider (a simplified sketch follows after this list).

  • Transparency: The system must be designed with sufficient transparency and provided with (digital) instructions for use so that deployers can operate the system correctly and comply with the provisions of the regulation themselves.

  • Human oversight: Oversight by natural persons serves to prevent or minimise risks. The provider can either integrate human oversight into the AI system before it is placed on the market or make it technically possible for the deployer of the system to exercise oversight itself.

  • Accuracy, robustness and cyber security: The system must have an appropriate level of accuracy, robustness and cyber security throughout its life cycle.

  • Establishment of a quality management system: This system is intended to ensure compliance with the regulation; it must be documented in writing and cover, among other things, procedures for risk management, post-market surveillance and the reporting of incidents.

  • Corrective measures: After the system has been placed on the market or put into service, corrective action must be taken where necessary if there is reason to believe that the system does not comply with the requirements of the AI Act.

  • Provision of information: In the event of official requests or audits, all information necessary to demonstrate compliance with the requirements of the regulation must be provided to the authorities.

  • Appointment of an EU representative: Providers that are not established in the EU must appoint a representative established in the EU in writing in accordance with Art. 25 AI Act before introducing the system to the EU market.

  • Conformity assessment procedure: In accordance with Art. 43 AI Act, the system must undergo the relevant conformity assessment procedure before it is placed on the market or put into operation. The provider must also issue an EU declaration of conformity for the AI system in accordance with Art. 48 AI Act and keep it for 10 years after commissioning or placing on the market.

  • CE conformity marking: The CE conformity marking in accordance with Art. 30 of Regulation (EC) No 765/2008 must be affixed to the AI system (digitally) to indicate conformity with the AI Act.

  • Registration in the EU database: Before the system is placed on the market or put into operation, it must be registered in the EU database referred to in Art. 60 AI Act.

  • Post-market monitoring: The provider must actively and systematically collect, document and analyse relevant data on the performance of the AI system, provided by deployers or obtained from other sources, over its entire service life.
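The AI Act does not prescribe any particular technical format for the logs mentioned above. Purely as an illustration, and under the assumption of a hypothetical wrapper around a model call (the class, file and field names below are invented for this sketch and are not taken from the regulation), automatic event logging could look roughly like this:

```python
# Illustrative sketch only: a minimal event logger for an AI system.
# Names such as EventLogger and "ai_events.jsonl" are hypothetical;
# the AI Act does not prescribe a specific log format or technology.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


class EventLogger:
    """Appends one JSON line per prediction so that key processes remain traceable."""

    def __init__(self, log_path: str, model_version: str):
        self.log_path = Path(log_path)
        self.model_version = model_version

    def log_event(self, input_data: str, output: str, confidence: float) -> None:
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            # Store a hash of the input rather than the input itself (data minimisation).
            "input_sha256": hashlib.sha256(input_data.encode("utf-8")).hexdigest(),
            "output": output,
            "confidence": confidence,
        }
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")


if __name__ == "__main__":
    logger = EventLogger("ai_events.jsonl", model_version="1.4.2")
    # In a real system, this call would wrap the actual model inference.
    logger.log_event("loan application #123", "declined", confidence=0.87)
```

Which attributes are recorded, how long the logs are retained and who may access them would, of course, have to be defined in line with the system's risk profile and the provider's documentation obligations.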

Obligations for deployers of high-risk systems (Art. 29 AI Act)
  • Technical and organisational measures: Deployers must first implement technical and organisational measures (TOMs) to ensure that the AI system is used in accordance with the instructions for use and the other requirements of Art. 29 para. 2-5 AI Act.

  • Human oversight: In the event that the provider has delegated the role of human supervision to the deployer, this task must be performed by a competent person who is sufficiently qualified and supported for this purpose.

  • Quality of input data: Insofar as the input data is under the deployer's control, the deployer must ensure that it is appropriate for the intended purpose of the AI system.

  • Duty of care: The deployer must use the system in accordance with the instructions for use and provide the provider with the information required for post-market monitoring. If it must be assumed that operation will lead to an unreasonable risk to health, safety or fundamental rights, the system must be taken out of service.

  • Reporting obligations: In the event of decommissioning or other serious incidents, there are various reporting obligations towards the provider or distributor of the system.

  • Retention of the logs: The logs automatically generated by the system must be kept by the deployer for at least six months in order to be able to prove the proper functioning of the system or to enable subsequent checks.

  • Information for employees: Employers who use AI in the workplace must inform the affected employees that an AI system is being used in relation to them.

  • Information obligations: Special information obligations apply if the system makes decisions about natural persons or is used to support such decisions, such as the right to an explanation of individual decision-making pursuant to Art. 68c AI Act.

  • Fundamental rights impact assessment: In contrast to what was originally proposed by the EU Parliament, a fundamental rights impact assessment now only has to be carried out by state deployers and by private deployers who perform public tasks; however, deployers who carry out creditworthiness checks or price life and health insurance policies are also covered.

It should not be overlooked that the provider obligations under Art. 16 AI Act may also apply to deployers in certain cases. This is particularly the case if the deployer

  • places a high-risk AI system on the market or puts it into operation under its own name or brand,
  • makes a significant change to an AI system categorised as high-risk without it losing its status as a high-risk AI system, or
  • makes a substantial change to the intended purpose of another AI system, thereby turning it into a high-risk AI system.

In these cases, the deployer is deemed to be the new provider of the AI system in accordance with Art. 28 para. 1, 2 AI Act, while the old provider is relieved of responsibility. However, the old provider must support the new provider in fulfilling its obligations.


Low-risk AI systems

Regardless of whether a system is categorised as high-risk, the regulation imposes certain transparency requirements on AI systems that pose particular risks of manipulation. From the regulator's perspective, such risks exist in particular when artificial intelligence generates content or comes into direct contact with natural persons. Special requirements therefore apply to the providers and deployers of such systems.

Obligations for providers of certain AI systems (Art. 52 para. 1 AI Act)
  • AI systems that are intended to interact directly with natural persons must be designed or conceived by the provider in such a way that the natural person is informed that they are interacting with an AI system. By way of exception, this does not apply if this fact is readily recognisable to a reasonable person.

  • AI systems that generate audio, image, video or text material must mark their output as artificially generated or manipulated in a machine-readable format (a simplified sketch follows below). The solution chosen by the provider to implement this requirement must be effective, interoperable, robust, reliable and state of the art.

In both cases, the information must be made available to the natural persons concerned in a clear and recognisable manner at the latest at the time of the first interaction or exposure to the AI system.
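How the machine-readable marking is implemented is left to the provider; established provenance or watermarking standards may be used for this purpose. As a rough sketch only – the JSON record format and field names below are hypothetical and not prescribed by the AI Act – a minimal provenance marker attached to generated content could look like this:

```python
# Illustrative sketch only: attaching a machine-readable "AI-generated" marker
# to generated content as a simple JSON record. Field names are hypothetical.
import json
from datetime import datetime, timezone


def label_generated_content(content: str, generator: str) -> dict:
    """Return the generated content together with a machine-readable provenance record."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,      # machine-readable disclosure
            "generator": generator,    # e.g. model or product name
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }


if __name__ == "__main__":
    record = label_generated_content("Generated product description ...", "example-model-1")
    print(json.dumps(record, indent=2))
```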

Obligations for deployers of certain AI systems (Art. 52 para. 2, 3 AI Act)
  • Data subjects must be informed by the deployer of an emotion recognition system or a system for biometric categorisation about the operation of this system.

  • Deployers of AI systems that create or modify images, videos or audio content must disclose that the generated content is not genuine. However, there are exceptions in the area of artistic freedom and satire.

  • Finally, deployers of AI systems that generate or modify text published for the purpose of informing the public must also disclose this in principle.

As in the case of providers, this information must be made available to the natural persons concerned in a clear and recognisable manner at the latest at the time of the first interaction or exposure.


AI literacy as a core requirement for all deployers of AI

Irrespective of the categorisation of AI, the regulation also stipulates that deployers of AI systems should generally have so-called "AI literacy". Accordingly, deployers must ensure that their own staff and other persons involved in the operation and use of AI systems on their behalf have sufficient AI expertise. The existing experience and knowledge of the persons concerned and the context in which the AI system is to be used should be taken into account. The level of expertise required depends on the risk potential of the AI system and the associated obligations.


Special case: General-purpose AI

The subject of subsequent amendments during the legislative process was so-called "general-purpose AI" (GPAI), formerly referred to as "foundation models". These are AI models that have been trained on a broad range of data, are designed for general use and can be adapted to a wide variety of tasks. Classic examples are GPT-4 from OpenAI and other large language models.

It is worth noting that the regulation only imposes additional requirements on the providers of GPAI. These apply on top of the other requirements – in other words, regardless of whether an AI system is also categorised as general-purpose AI, the requirements outlined above, in particular those for high-risk AI systems, must still be observed. Deployers of GPAI are not subject to any additional obligations, but they may have to comply with the requirements outlined above.

The obligations for providers of GPAI are listed in Art. 52c para. 1 AI Act and will be briefly outlined here:

  • Technical Documentation: This should be created and continuously updated and must include the training and test procedures of the AI system as well as its evaluation results.

  • Providing information and documentation on how the AI model works: This should enable downstream providers who intend to integrate the GPAI model into their own AI system to understand the capabilities and limitations of the system and fulfill the requirements placed on them.

  • Creation of a policy for compliance with the copyright directive.

  • Publication of a sufficiently detailed summary of the content used for the training of the GPAI model.

In Art. 52a AI Act, the legislator also defines GPAI models with systemic risk and imposes the following additional requirements on their providers in Art. 52d AI Act:

  • Performance of model evaluations: In this context, adversarial tests to identify and mitigate systemic risks must also be carried out and documented.

  • Assessment and mitigation of potential systemic risks and their sources.

  • Documentation and reporting of serious incidents and related remedial measures.

  • Ensuring an appropriate level of cyber-security.


Outlook

The provisions of the regulation will generally apply 2 years after entry into force. However, some general provisions and the provisions on prohibited AI practices are exceptionally applicable 6 months after entry into force. In addition, the regulations on the classification of high-risk systems and the corresponding obligations only apply after 3 years.

For providers and deployers of AI systems, the first crucial question is whether the AI system in question is to be categorised as high-risk. As the extent of the implementation effort will depend largely on this question and high fines may be imposed in the event of non-compliance with high-risk regulations, the classification should be particularly thorough and legal advice should be sought if in doubt. However, most AI systems will in all likelihood not be considered high-risk, meaning that many companies will be confronted with manageable additional work.

There is still some time before the AI Act enters into force in May. Nevertheless, it is worth starting to implement the requirements now, or at least preparing for them.


Legal advice

Simpliant Legal - Wittig, Bressner, Groß Rechtsanwälte Partnerschaftsgesellschaft mbB

Data protection

We support you in implementing all data protection requirements under the GDPR.

Information security

We support you in setting up a holistic ISMS, for example in accordance with ISO 27001.

Artificial intelligence

We advise you on the integration of AI and develop legally compliant usage concepts.

