
Updated Wednesday, June 7, 2023

Current developments in AI regulation

The AI Act as a European Approach to the Regulation of Artificial Intelligence

Boris Arendt

Salary Partner (Attorney-at-law)

Leon Neumann

Scientific Research Assistant

Introduction
Current status of regulatory development
Artificial Intelligence Act (AI Act)
Directive for AI Liability
Further regulations
Outlook


The release of ChatGPT at the end of 2022 has reignited discussions about the use of artificial intelligence. The EU Commission, meanwhile, has been working for several years on an AI Act to provide a legal framework for artificial intelligence. On May 11, 2023, the lead committees of the European Parliament gave the law the green light, clearing the way for its adoption in plenary in mid-June.


Introduction

Artificial intelligence is increasingly finding its way into our everyday lives. Its use can optimize decision-making processes and workflows in many areas of life and enables solutions that would previously have been almost inconceivable. However, rapid technical development and the steadily growing use of AI are also revealing more and more risks associated with artificial intelligence, some of which are relevant to fundamental rights.

These risks become particularly apparent when AI is used in the context of facial recognition, surveillance, and health technologies. In recent weeks, however, debates and media reports have focused primarily on the chatbot "ChatGPT" developed by California-based OpenAI.

These examples give an idea of the societal consequences artificial intelligence can have once it is used by large parts of the population. The more authority and influence AI gains, the greater the risk that it will be instrumentalized for harmful purposes, such as influencing public opinion and democratic discourse. This is not the only reason why it is necessary to regulate the use of artificial intelligence.

This is also the view of Sam Altman, one of the founders of OpenAI, who has publicly called for the regulation of artificial intelligence.


Current status of regulatory development

The EU Commission recognized the risks of AI several years ago and in 2018 developed a strategy to regulate artificial intelligence while at the same time promoting its development. Specifically, a proposal for the Artificial Intelligence Act ("AI Act") was submitted in spring 2021. In addition to promoting the development of AI, the draft aims to ensure adequate protection of fundamental rights through a legal framework and to establish a basis of trust in dealing with artificial intelligence.

In response to the AI hype triggered by ChatGPT and the associated risks, the parliamentary committees of the European Parliament introduced amendments to the Commission proposal on May 11, 2023. This was preceded, among other things, by negotiations on stricter requirements for generative general-purpose AI, so that fundamental rights, including freedom of expression, must be taken into account in its development. Further obligations and transparency requirements for providers of so-called foundation models such as GPT were also added. Moreover, general principles such as technical security, transparency and data governance are to apply to all AI systems, though without creating new free-standing requirements; rather, the principles are to be incorporated into future technical standards and guidance documents.

In addition to the AI Regulation, a Commission proposal for a directive on AI liability has been on the table since fall 2022, which is intended to make it easier for persons harmed by AI systems to assert claims.

Efforts are also underway in the U.S. to create specific regulations for artificial intelligence. For example, proposals for an "AI Risk Management Framework" and an "AI Bill of Rights" were presented in 2022. In contrast to the European approach, however, these merely serve as non-binding guidelines and thus rely in principle on the voluntary implementation of the specifications by the addressees.


Artificial Intelligence Act (AI Act)

The AI Act follows a risk-based approach, under which AI systems are divided into four risk groups based on their field of application and purpose: systems with unacceptable, high, limited and minimal risk. The regulation addresses, on the one hand, providers of AI systems that place a system on the market or put it into service in the EU and, on the other hand, their users, unless the use is solely for private purposes.

As the legislator assumes, the majority of all AI systems pose a minimal risk. For such systems, which include spam filters, for example, the regulation does not contain any specific requirements.

In contrast, AI systems with unacceptable risk are prohibited under Art. 5 AI Regulation. This covers only a few groups of cases, such as social scoring systems (along the lines of the Chinese model) or, with a few exceptions, biometric real-time remote identification systems in public spaces for law enforcement purposes.
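
To make the logic of this tiering tangible, the following Python sketch models the four risk groups. The tier names follow the draft; the mapping of the example use cases is our simplified, illustrative assumption, not a classification taken from the Regulation.

from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the draft AI Act."""
    UNACCEPTABLE = "prohibited (Art. 5 AI Act draft)"
    HIGH = "permitted subject to strict requirements"
    LIMITED = "permitted subject to transparency duties"
    MINIMAL = "no specific requirements"

# Illustrative mapping only; the real classification depends on the
# system's purpose and field of application under the Regulation.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric real-time remote identification": RiskTier.UNACCEPTABLE,
    "CV screening for recruiting": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")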

Transparency requirements

The regulation imposes specific transparency requirements on AI systems that carry particular risks of manipulation, regardless of whether the system is classified as high-risk. These now also include so-called generative foundation models such as GPT, which is why ChatGPT or Replika are likely to fall into this group. To meet the requirements, users of such a system must be made aware that they are interacting with an AI, unless this is obvious from the circumstances. In addition, providers of such models will have to register them in an EU database. Whether this adequately addresses the risks associated with chatbots remains to be seen; at the least, the transparency obligations can help reduce the risk of so-called "deep fakes".
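
As a minimal sketch of how a provider might implement this disclosure duty in a chat interface (generate_answer is a hypothetical placeholder for the actual model call, not a real API):

def generate_answer(message: str) -> str:
    # Placeholder for the provider's actual model call.
    return f"(model answer to: {message})"

def reply_with_disclosure(message: str, disclosed: bool) -> tuple[str, bool]:
    """Prepend the 'you are talking to an AI' notice on first contact,
    unless it has already been shown in the session (or is obvious
    from the circumstances, which this sketch does not model)."""
    answer = generate_answer(message)
    if not disclosed:
        answer = "Notice: you are interacting with an AI system.\n" + answer
    return answer, True

text, disclosed = reply_with_disclosure("Hello!", disclosed=False)
print(text)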

High-risk systems

The core of the regulation is the governance of high-risk systems. This group includes, on the one hand, AI systems used as safety components in products and, on the other hand, AI-based applications with particular relevance to fundamental rights, such as those in human resources management, education and training, or critical infrastructure. High-risk systems, as well as their operators and users, are subject to specific requirements that must be met when the system is used in the EU. Some of these requirements are outlined below.

Risk management system

First, the AI system must be accompanied by a documented risk management system that reduces the system's risk to an acceptable level through concrete, state-of-the-art measures. The risk management system is designed as an iterative process and must be updated regularly: the actual and potential risks of the AI system must be continuously identified and weighed, and "appropriate" measures must then be taken to counter the risks identified in this way.
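
The iterative logic can be sketched roughly as follows; the numeric risk scale and the acceptability threshold are purely illustrative assumptions, as the Regulation does not quantify them:

from dataclasses import dataclass

ACCEPTABLE_LEVEL = 2  # illustrative threshold on a 1 (low) to 5 (high) scale

@dataclass
class Risk:
    name: str
    level: int  # estimated combination of severity and probability

def risk_management_pass(risks: list[Risk]) -> list[Risk]:
    """One iteration of the process: weigh each identified risk and
    mitigate those above the acceptable level, then re-assess in the
    next pass as the system and its use evolve."""
    for risk in risks:
        while risk.level > ACCEPTABLE_LEVEL:
            # placeholder for an "appropriate", state-of-the-art measure
            risk.level -= 1
    return risks

risks = [Risk("discriminatory output", 4), Risk("mislabeled training data", 3)]
print(risk_management_pass(risks))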

In addition, there are individual documentation and information obligations. Article 13 I AI Act (draft) stipulates, for example, that systems must be designed and developed in such a way that their operation is sufficiently transparent to enable users to interpret and use the results appropriately, without, however, specifying concrete measures by which this objective can be achieved.

Working with the data sets/data protection

Of particular relevance from a data protection perspective is Art. 10 AI Act (draft), which sets requirements for handling the datasets with which the AI is trained. If a high-risk system is trained with data, the training, validation and test datasets used must meet certain quality requirements. The datasets are subject to certain management practices under Art. 10 II AI Act (draft), for example with regard to the collection, evaluation and preparation of data or with regard to possible data gaps. In addition, under Art. 10 III 1 AI Act (draft), the datasets must be relevant, representative, error-free and complete. In practice, however, it will be difficult to determine when a dataset is free of errors, as objective quality standards are still lacking.
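
In the absence of such standards, providers will likely fall back on technical proxy checks. The following sketch with pandas shows simple approximations of "complete" and "error-free"; the concrete checks are our assumptions, not criteria named in Art. 10:

import pandas as pd

def basic_quality_findings(df: pd.DataFrame, required_columns: list[str]) -> list[str]:
    """Collect simple findings that would have to be examined and
    documented as part of the data governance practices."""
    findings = []
    missing = [c for c in required_columns if c not in df.columns]
    if missing:
        findings.append(f"missing columns (relevance): {missing}")
    if df.isna().any().any():
        findings.append("dataset contains empty values (completeness)")
    if df.duplicated().any():
        findings.append("dataset contains duplicate rows (possible bias)")
    return findings

df = pd.DataFrame({"age": [25, None, 40], "outcome": [1, 0, 1]})
print(basic_quality_findings(df, required_columns=["age", "outcome", "gender"]))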

Article 10 V AI Act (draft) contains a legal basis for the processing of special categories of personal data within the meaning of Article 9 I GDPR, if the processing is strictly necessary for the identification and correction of bias and appropriate safeguards are taken for the fundamental rights and freedoms of the data subjects. A further condition is that the purpose pursued would be significantly impaired by prior anonymization.

At this point, the provision makes use of the general clause in Art. 9 II lit. g GDPR, since the prevention of bias ("distortions") is likely to be regarded as a public interest, although the question arises whether it is also a substantial one, as that clause requires. It also remains open when the processing of personal data under Art. 10 V AI Act (draft) is strictly necessary and when the purpose would be "significantly impaired" by anonymizing the data.

Post-Market-Monitoring

Furthermore, according to Art. 61 AI Act (draft), the provider is obliged to carry out "post-market monitoring", i.e. to actively and systematically collect data on the operation of the system. The data collected in this way is to be combined with usage data from other sources, enabling the provider to continuously evaluate whether the AI system remains in compliance with the regulation. In the event of serious incidents and malfunctions of the system, there is a reporting obligation pursuant to Art. 62 AI Act (draft). According to Art. 61 III AI Act (draft), post-market monitoring must follow a plan, which is part of the technical documentation pursuant to Annex IV No. 8 AI Act.
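
A minimal sketch of such active data collection, assuming a simple append-only event log (the file format and record fields are our assumptions; the Regulation does not prescribe any):

import datetime
import json

def record_operation_event(system_id: str, severity: str, description: str,
                           path: str = "post_market_log.jsonl") -> None:
    """Append one monitoring event; serious incidents would additionally
    trigger the reporting duty under Art. 62 AI Act (draft)."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "severity": severity,
        "description": description,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

record_operation_event("hr-screening-v2", "minor", "unexpected score distribution")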

Sanctions

Violations of the Regulation are to be sanctioned by the Member States with fines in accordance with Art. 71 AI Act (draft). The highest fines, of up to EUR 30 million or, in the case of companies, up to 6% of worldwide annual turnover, are reserved for violations of Art. 5 AI Act (systems with unacceptable risk) and Art. 10 AI Act (handling of data).
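
In figures: under the Commission draft, the higher of the two ceilings applies to companies, which a short calculation illustrates (the turnover figure is invented for the example):

def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for violations of Art. 5 or Art. 10 AI Act (draft):
    EUR 30 million or 6% of worldwide annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion annual turnover
print(f"EUR {maximum_fine_eur(1_000_000_000):,.0f}")  # EUR 60,000,000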


Directive for AI Liability

The AI Liability Directive applies to non-contractual, fault-based, civil damages claims relating to harm caused by AI systems.

One of the key provisions of the Directive is the right to demand a court order for the disclosure of information as potential evidence in the event of damage caused by high-risk systems, in accordance with Art. 3 AI Liability Directive.

The disclosure obligation is limited to what is necessary and proportionate, and the protection of business secrets must be observed. In addition, Art. 4 AI Liability Directive contains a rebuttable presumption of causality between the defendant's fault and the output produced by an AI system, or its failure to produce one.

Overall, the directive thus takes into account the "black box" problem, i.e. the fact that the decision-making process of an AI regularly cannot be traced, so that injured parties have difficulty proving the facts they must present in court.


Further regulations

The AI Regulation will not be the only piece of legislation dealing with artificial intelligence. Since AI-based applications often process personal data, the AI Regulation is flanked in such cases by the provisions of the GDPR; here the principles of data processing, such as data minimization, purpose limitation and transparency, are primarily relevant. Moreover, an AI's recourse to certain data may infringe copyrights or personality rights. Labor and IT law as well as the protection of trade secrets must also be observed by operators and users of AI, so that a wide range of rules already counters the excessive use of artificial intelligence.


Outlook

With their vote on the AI Act on May 11, 2023, the parliamentary committees of the EU Parliament paved the way for adoption in plenary in mid-June. After that, the final phase of the legislative process begins, with negotiations between the Parliament, the EU Council and the Commission (trilogue).

Further problems may arise when the regulation is implemented. For example, the requirements placed on high-risk systems must be reconciled with data protection principles: the obligation to log processes during the operation of an AI system under Article 12 AI Act conflicts with the principle of data minimization, which is why implementing this requirement in a way that complies with data protection law will be challenging.

Overall, the Regulation imposes a not inconsiderable burden on providers, users and operators of AI systems. To make compliance easier, the "European Artificial Intelligence Board" provided for in Art. 56 AI Act, which is to issue opinions, recommendations and guidelines, could be of help.
