
Updated Friday, April 28, 2023

Preliminary agreement on European “Artificial Intelligence Act” reached

European Parliament reaches provisional deal on world's first "AI-Law"

Steffen Groß

Partner (Attorney-at-law)

Ban on emotion recognition, biometric identification and predictive policing
High-risk AI classification criteria defined
Safeguards for detecting biases in processing sensitive data for high-risk AI models
General principles for all AI models


On 27 April 2023, the members of the European Parliament reached a provisional political deal on the world's first "AI law". The "Artificial Intelligence Act" aims to regulate AI based on its potential to cause harm. Lawmakers had entered the final negotiations on 26 April with several critical issues still open. The text may still undergo minor adjustments ahead of a key committee vote scheduled for 11 May, and the deal requires all political groups to support the compromise without tabling alternative amendments.


Ban on emotion recognition, biometric identification and predictive policing

The use of emotion recognition AI software is prohibited in law enforcement, border management, the workplace, and education. The ban on biometric identification software has also been expanded.

The use of AI for preventive policing ("predictive policing") is also to be restricted. The ban on "purposeful" manipulation was maintained, despite debates over the difficulty of proving intent.


High-risk AI classification criteria defined

The AI Act's high-risk classification will apply only to AI models that pose a significant risk of harm to health, safety, or fundamental rights. This includes AI used to manage critical infrastructure where a failure would entail a severe environmental risk. Whether a risk is significant will be assessed by reference to its severity, intensity, probability of occurrence, and the duration of its effects.


Safeguards for detecting biases in processing sensitive data for high-risk AI models

The European Parliament has included additional safeguards for high-risk AI models that process sensitive data, such as data on sexual orientation or religious beliefs, in order to detect negative biases. Such processing must take place in a controlled environment and is permitted only where the bias cannot be detected by processing synthetic, anonymized, pseudonymized, or encrypted data. The sensitive data may not be transmitted to other parties, must be deleted after the bias assessment, and providers must document why the processing took place.


General principles for all AI models

The members of the European Parliament proposed general principles that would apply to all AI models, including human agency and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination, and fairness.

High-risk AI systems will also have to keep records of their environmental impact and comply with European environmental standards. These principles are not meant to create new obligations but will be incorporated into technical standards and guidance documents.

Source: https://www.euractiv.com/section/artificial-intelligence/news/meps-seal-the-deal-on-artificial-intelligence-act/

Legal advice

Simpliant Legal - Wittig, Bressner, Groß Rechtsanwälte Partnerschaftsgesellschaft mbB

Data protection

We will support you in implementing all data protection requirements under the GDPR.

Information security

We support you in setting up a holistic ISMS in line with standards such as ISO 27001.

Artificial intelligence

We advise you on the integration of AI and develop legally compliant usage concepts.


© 2019 - 2024 Simpliant