The use of artificial intelligence (AI) in human resources offers both significant opportunities and challenges. AI can streamline recruitment, enhance decision-making, and improve efficiency in HR processes. However, it also raises concerns regarding data protection, transparency, and fairness in automated decision-making. Companies must navigate these risks carefully to leverage AI responsibly while ensuring compliance with legal and ethical standards. The following section presents the main legal principles, upcoming regulations and practical issues, followed by current developments and recommendations for action.
Data Protection Requirements
Data protection requirements (GDPR and BDSG): The processing of personal data is subject to strict requirements under the EU General Data Protection Regulation (GDPR) and the German Federal Data Protection Act (BDSG). In the employment context, the opening clause of Art. 88 GDPR applies, under which member states may adopt specific rules for employee data. Germany has made use of this with Section 26 BDSG, which permits the processing of personal data for the purposes of the employment relationship. However, on March 30, 2023, the European Court of Justice (ECJ) ruled that a state-law provision essentially identical to Section 26 (1) sentence 1 BDSG violates the GDPR. As a result, Section 26 BDSG is now only applicable to a limited extent; since then, the permissibility of data processing in the employment relationship has been based primarily and directly on the GDPR itself (in particular Art. 6 GDPR). Employers must therefore check, for every AI application in the HR area, whether a GDPR legal basis applies (such as performance of a contract, legal obligation or legitimate interest) and must comply with the strict data protection principles (purpose limitation, data minimization, integrity, etc.).
Processing of personal data in the HR sector: Typical HR data (from application documents to performance reviews) are considered personal data and may only be processed with AI under certain conditions. Art. 6 (1) GDPR lists the permissible legal bases. In the employment relationship, the following are particularly relevant:
- Performance of a contract (Art. 6 (1) (b) GDPR): if data processing is necessary for the performance or initiation of the employment contract, e.g. payroll or the organization of workflows. AI-supported processes in the application process or during the employment relationship can often be based on this – but only if the use of the AI is suitable, necessary and appropriate for the performance of the contract. An AI-based application that demonstrably delivers unsuitable or distorted results would therefore not be permissible: it lacks suitability, and more data-protection-friendly alternatives take precedence.
- Legitimate interest (Art. 6 (1) (f) GDPR): if neither a contract nor a legal obligation applies, an employer can base the use of AI on legitimate interests. In this case, a strict balancing of interests is required: the legitimate interest of the company (e.g. objectification of selection decisions or increased efficiency through AI) must outweigh the interests of the employees that are worthy of protection. The expectation of the data subjects, alternative less intrusive means and the transparency of the use are among the factors taken into account in the balancing of interests. Particularly high standards must be applied in the employment relationship, since there is a relationship of dependency.
- Consent (Art. 6 (1) (a) GDPR): In principle, employees or applicants can consent to the processing, but in labor law this is problematic. Due to the relationship of subordination, strict requirements apply to the voluntary nature of consent. Consent is only valid if there is no pressure and the data subject does not suffer any disadvantage if they refuse. Therefore, consent is rarely relied upon in sensitive cases (e.g. automated performance monitoring). The current draft of an Employee Data Protection Act explicitly states the cases in which voluntary consent can be assumed in the employment relationship (e.g. use of employee photos on the intranet, voluntary health offers).
- Legal permission/obligation (Art. 6 (1) (c) and (e) GDPR): In certain cases, processing is permitted to fulfill a legal obligation or in the public interest – such as mandatory reporting, occupational safety or in the public sector. These bases play a lesser role in the use of AI in the private sector, but may be relevant (e.g. for AI systems designed to ensure compliance or safety in the workplace on the basis of legal requirements).
In addition, special rules apply to special categories of data. If an AI processes health data, biometric identifiers or information on ethnic origin (e.g. in the context of video interviews or personality tests), the prohibition of Art. 9 GDPR applies. Processing is then only permitted in exceptional cases, e.g. if it is necessary for the exercise of rights or obligations under labor law and appropriate safeguards exist (Art. 9 (2) (b) GDPR, implemented in Section 26 (3) BDSG). In practice, this means that if an AI recognizes emotional states or health characteristics of applicants, for example, this would be inadmissible unless there is explicit consent or a necessity under labor law. Automated decisions based on such sensitive characteristics are highly problematic from a legal point of view and are generally prohibited under the GDPR.
Permitted uses of AI in human resources management: There is currently no specific “AI law” for human resources – AI applications must comply with the existing data protection regulations. The permissibility of AI-supported data processing is therefore based on the legal principles mentioned above. It is important that the use of AI is purpose-specific (used only for legitimate HR purposes) and proportionate. For example, an AI may only process the personal data necessary for the specific purpose and may not collect data excessively. In addition, it must always be checked whether a milder, more data-protection-friendly means of achieving the purpose exists. When using AI, it is essential to comply with the principles of Art. 5 GDPR (lawfulness, transparency, data minimization, purpose limitation, accuracy, storage limitation, integrity & confidentiality). As with any data processing, violations can result in substantial fines (up to €20 million or 4% of global annual turnover under the GDPR).
In addition, companies should carry out a data protection impact assessment (DPIA) in accordance with Art. 35 GDPR for highly invasive AI systems. In particular, AI applications that enable extensive monitoring or profiling of employees are likely to be classified as high-risk by supervisory authorities – a DPIA is mandatory here in order to systematically evaluate risks and define suitable protective measures. Likewise, privacy by design (Art. 25 GDPR) must be observed: data protection-friendly settings and security measures must be provided for as early as the selection or development of HR-AI systems.
EU AI Regulation (EU AI Act)
The European Union has adopted a regulation governing AI (the EU AI Act). The AI Regulation takes a risk-based approach and has significant implications for AI systems in human resources. In the HR area, many AI applications will be classified as “high-risk AI systems,” which results in stricter requirements and regulatory oversight.
Regulations for high-risk AI in human resources: AI systems that decide on hiring, promotions, working conditions or terminations, or that are used in personnel selection, are considered high-risk. Annex III of the regulation explicitly lists AI for personnel selection and systems that make decisions on the establishment, termination or promotion of an employment relationship as high-risk applications. It also covers AI systems that assign tasks based on the behavior or characteristics of employees or that monitor and evaluate their performance and behavior. This means that automated application screenings, AI-supported employee evaluations and monitoring tools clearly fall into the category of high-risk AI.
The AI Regulation defines strict conditions for such high-risk systems:
- Risk management and data quality: Providers and users must implement a risk management system throughout the entire life cycle of the AI. They must analyze and minimize the risks associated with its use (such as discrimination or wrong decisions). In addition, data governance procedures are required to ensure the quality, representativeness and timeliness of training data. In particular, it must be ensured that the data is as free as possible from bias in order to avoid discrimination. Systematic documentation of the training and test data is required.
- Transparency and traceability: High-risk AI in the field of HR is subject to special transparency requirements. Companies must disclose when and where AI is used in HR processes. In particular, the criteria and functioning of the AI must be comprehensible – a black-box AI is not permissible for important personnel decisions. Those affected (e.g. applicants) should be able to understand the basis on which decisions are made. The regulation also requires explainability: a comprehensible explanation of the AI decisions must be available, at least for the supervisory authorities and in the event of queries. AI systems that interact with users (such as chatbots in HR) must alert the user to the fact that they are AI, and so-called deep fakes would have to be clearly marked.
- Human oversight: Despite AI support, important decisions must not be made fully automatically and without human influence. The regulation requires mechanisms to ensure that a human being makes the final decision. Operators must ensure that human decision-makers can review the AI's recommendations and intervene or overrule them if necessary. This serves to protect the rights of those affected and corresponds to the prohibition of purely automated decisions under Art. 22 GDPR. It is therefore not permissible to completely outsource critical personnel decisions to an AI.
- Documentation and supervision: Extensive technical documentation must be provided for high-risk AI (including a description of the system, purpose, logic, performance characteristics) and usage logs must be kept on an ongoing basis. Every use of such a system should be traceable. In addition, high-risk AI systems must be registered in an EU database. This registration should enable the supervisory authorities to maintain an overview of the market. There is also a conformity assessment procedure: certain AI systems must be certified before they are placed on the market to confirm compliance with all requirements (e.g. in terms of safety, accuracy and robustness). In the human resources area, this means for companies: purchased AI software may only be used if it has a corresponding CE marking or certification, or the manufacturer is subject to strict obligations.
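The ongoing logging duty described above can be illustrated with a minimal sketch. This is a hypothetical, simplified example: the Regulation requires traceability of each use, but the field names, file format and function below are illustrative assumptions, not a prescribed format.

```python
import datetime
import json

def log_ai_use(log_file, system_id, purpose, input_ref, output_summary, operator):
    """Append one structured record per use of a high-risk AI system.

    All fields here are illustrative; what matters legally is that each
    use of the system remains traceable after the fact.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,            # internal ID of the registered AI system
        "purpose": purpose,                # documented, legitimate HR purpose
        "input_ref": input_ref,            # reference to the input, not the data itself
        "output_summary": output_summary,  # what the AI recommended
        "operator": operator,              # human responsible for this use
    }
    # Append-only JSON Lines log: one immutable record per line
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_ai_use(
    "ai_usage.jsonl", "hr-screening-v2", "pre-selection of applications",
    "application ref A-17", "ranked in top decile", "recruiter (name withheld)",
)
```

In practice such a log would live in a tamper-evident store with access controls; the point of the sketch is only that each use produces a self-describing, reviewable record.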
Sanctions: The EU AI Regulation provides for drastic fines for violations – in the final text, up to €35 million or 7% of global annual turnover (whichever is higher) for the most serious violations. The possible penalties thus even exceed the GDPR's maximum limits. Companies must therefore prepare for dual compliance oversight: controls under the AI Regulation will take place in parallel with those of the data protection authorities.
Impact on HR processes: These regulations will require HR departments to adapt or review their AI-based applications. Many common HR AI tools – e.g. software for resume screening, video interview analysis tools, internal talent scoring systems – will fall under the high-risk regime. Companies must ensure that such tools meet the required criteria (e.g. freedom from bias, transparency reports, human-in-the-loop). Contracts with AI providers may need to be adapted to specify documentation requirements or audit rights. Additional steps may be necessary in the recruitment process, e.g. informing applicants if an AI system has been used for pre-selection, or offering an alternative application route without AI filtering. Internally, processes such as employee appraisals or promotion decisions must be designed in such a way that, despite AI evaluation, an independent human evaluation is always carried out, documenting why the AI is followed or not.
Overall, the EU AI Regulation means more work for the HR department in the form of compliance checks, risk analyses and reporting, but it also offers opportunities: companies that implement compliant and fair AI systems at an early stage can create trust among employees and applicants and position themselves as pioneers of ethical AI.
Practical questions when using AI in HR
The practical introduction of AI in human resources raises some key legal questions: How far can automated decision-making go? What level of transparency must be guaranteed? And what rights do employees and the works council have in this regard?
Automated decision-making and legal limits
So-called automated individual decisions (Art. 22 GDPR) – i.e. decisions that are made exclusively by an automated system without human intervention and have legal or similarly significant effects – are extremely critical in the field of human resources. As the Data Protection Conference emphasizes, decisions with legal effect must never be fully automated. A typical example would be an AI system that autonomously evaluates job applications and sends out rejections or invitations without a human resources manager checking them – such a procedure violates Art. 22 GDPR.
According to applicable law, a data subject has the right not to be subject to a decision based solely on automated processing if it produces legal effects concerning him or her or significantly affects him or her (Art. 22 (1) GDPR). In human resources, this includes, for example, automatic rejections of applicants, decisions on promotion or termination based solely on an algorithm, or the fully automatic distribution of bonuses. At least in the final instance, such final decisions must be made or reviewed by a human being.
There are narrow exceptions (for example, if the automated decision is necessary for the conclusion of a contract, is permitted by law or has been expressly consented to, Art. 22 (2) GDPR). However, these exceptions are rarely relevant in HR practice. Obtaining consent from applicants or employees for fully automated decisions is tricky due to the issue of voluntariness and would also require additional protective measures (such as the right to request a human review of the decision within a reasonable period of time, Art. 22 (3) GDPR). In fact, the GDPR already stipulates that important personnel decisions must always be made with human input – which is in line with the requirements of the EU AI Regulation.
Therefore, AI in human resources should primarily serve as a decision support, not as a replacement for decision-makers with personnel responsibility. Companies must ensure that AI suggestions are not adopted without review. This means, for example, that a recruiting algorithm can suggest a ranking of candidates, but the final selection is made by a recruiter who makes changes to the AI suggestion, especially if they notice inconsistencies. Even time or cost pressures must not be allowed to degrade the human part to a mere formality. Rather, decision-makers need real discretion and leeway for evaluation in order to take into account the individual context that an AI cannot grasp.
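The decision-support principle described above can be sketched as a simple control-flow pattern: the AI output is recorded as a suggestion only, and no decision becomes effective until a human records it together with documented reasoning. This is a hedged illustration; the class and field names are invented, not taken from any real HR system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateAssessment:
    candidate_id: str
    ai_score: float                        # AI ranking score (decision support only)
    ai_rationale: str                      # reason codes supplied by the system
    human_decision: Optional[str] = None   # e.g. "invite" / "reject" – set only by a person
    human_note: str = ""                   # documented human reasoning, incl. deviations

def finalize(assessment: CandidateAssessment, decision: str, note: str) -> CandidateAssessment:
    """A decision becomes binding only once a human records it with reasoning.

    Mirrors the Art. 22 GDPR logic: the AI output is a suggestion; the
    final decision and its justification must come from the reviewer.
    """
    if not note:
        raise ValueError("human reasoning must be documented, not a mere formality")
    assessment.human_decision = decision
    assessment.human_note = note
    return assessment

a = CandidateAssessment("cand-01", ai_score=0.82,
                        ai_rationale="matched required qualifications")
finalize(a, "invite", "AI ranking plausible; relevant project experience confirmed")
```

Forcing a non-empty justification is one simple way to keep the human step from degrading into a rubber stamp: the reviewer must articulate why the AI suggestion is followed or overruled.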
Transparency and explainability requirements for AI systems
Transparency is a central tenet of both data protection law and the new AI regulation. For employers, this initially means openly communicating with their employees and applicants when AI tools are used in HR processes. The GDPR already requires comprehensive information obligations (Art. 13, 14 GDPR): For example, applicants must be informed in the company's data protection declaration that their data will be processed and possibly profiled by automated systems (such as an applicant management AI). As soon as automated decision-making takes place in an individual case, the person concerned must be informed of this, including meaningful information about the logic and scope of the processing. In practice, data protection information in HR should therefore state, for example: “We use a software system to preselect applications that assesses suitability based on the information you provide. This assessment is included in our decision, but is reviewed again by our HR staff.” Likewise, employees have a right of access (Art. 15 GDPR), which also includes the use of AI and how it works.
Beyond the mere obligation to provide information, supervisory authorities are increasingly demanding explainability of AI decisions. Non-transparent black-box models are rejected in the HR sector, as employees must be able to understand the criteria by which they are evaluated. In practice, this means that companies should favor AI systems that provide interpretable results or at least provide explanatory modules (e.g., feature importance or decision paths for algorithmic decisions). Some AI applications, for example, enable a “reason code” to be given for why a candidate was deemed suitable (e.g., certain qualifications). Such functions make it easier to explain the reasons for rejection to applicants or employees – which is advisable for reasons of fairness and to avoid accusations of discrimination.
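For illustration, a “reason code” can be derived directly from an interpretable scoring model: with a transparent linear score, the largest per-feature contributions name the criteria that drove the assessment. This is a minimal sketch with invented features and weights; real HR systems and their explanation modules differ.

```python
def score_with_reasons(features, weights, top_n=2):
    """Score a candidate with a transparent linear model and return the
    features that contributed most – usable as 'reason codes'.

    `features` and `weights` map feature names to numbers; both the
    feature set and the weights here are invented for illustration.
    """
    contributions = {name: features.get(name, 0.0) * w
                     for name, w in weights.items()}
    total = sum(contributions.values())
    # The strongest positive contributions become the explanation
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return total, reasons

weights = {"years_experience": 0.5, "relevant_degree": 1.0, "language_cert": 0.8}
features = {"years_experience": 4, "relevant_degree": 1, "language_cert": 0}
total, reasons = score_with_reasons(features, weights)
# `reasons` names which qualifications drove the assessment – and can be
# communicated to the applicant or documented for a court
```

The design choice matters more than the code: a model whose score decomposes into named contributions can always answer “why?”, whereas a black-box model needs a separate explanation layer bolted on afterwards.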
The EU AI Regulation also requires transparency measures for high-risk systems: for example, users must be trained to understand the limitations of the AI, and, if necessary, detailed information about the decision-making logic must be provided to the supervisory authorities. While not every algorithm has to be publicly disclosed, in case of doubt an authority or court can demand to see it, especially if there is a suspicion of discrimination. Transparency is therefore also a prerequisite for demonstrating compliance.
Employee rights and co-determination in the use of AI
The use of AI affects various rights of employees and the rights of works councils to be involved (in companies with co-determination):
- Data protection and privacy rights of employees: Employees have the right to have their personal data protected and only processed lawfully. They can request information from their employer about their stored data and, if necessary, demand that it be corrected or deleted (Art. 15–17 GDPR). If AI is used for performance evaluation or behavior control, this affects the general personal rights of employees. In Germany, it is recognized that excessive surveillance or permanent profiling is inadmissible – this is explicitly addressed again in the draft of the Employee Data Protection Act. The draft provides, for example, for specific transparency and labeling requirements when AI is used in the workplace to protect the workforce. In addition, clear rules on employee profiling are to be introduced. This would give employees the right to know where AI is used in the work process (e.g. a notice “This conversation may be supported by language analysis software”). However, even without a new law, it is already advisable to provide transparent information about AI projects internally out of respect for employee rights.
- Equal treatment and non-discrimination: A central employee right is protection against discrimination (General Equal Treatment Act, AGG). AI systems must be designed in such a way that they do not cause any impermissible unequal treatment based on, for example, gender, age, origin or disability. Should an AI algorithm systematically disadvantage women or older applicants, for example, this violates applicable law. Employers are liable for such discrimination, even if it was caused by a “black box” model. Therefore, employees have a de facto right to AI that is verifiably free of discrimination. In practice, companies should conduct so-called bias tests before using an HR AI and document the results. If an employee feels that they have been treated unfairly by an AI (e.g. an unfair performance assessment), they can file a complaint. In such cases, the employer must be able to demonstrate that the AI system works objectively and in compliance with the law. Here, explainability pays off again, as it makes it possible to disclose the basis for the decision to the employee or a court.
- Co-determination rights of the works council: In German companies with a works council, the rights of co-determination under works constitution law apply when using AI. Section 90 (1) no. 3 of the German Works Constitution Act (BetrVG) requires the employer to inform the works council at an early stage about planned technical installations for monitoring employees. AI systems that process employee data regularly fall under this category – for example, new software that records work behavior must be disclosed in the planning phase. Even more important: Section 87 (1) no. 6 BetrVG gives the works council a right of co-determination when technical facilities are introduced that are suitable for monitoring employee behavior or performance. Many AI systems fulfill this criterion because they collect performance data or analyze communication patterns. Without the consent of the works council or a corresponding company agreement, the employer may not introduce such AI. This is confirmed by practice: as soon as the employer itself provides an AI solution or orders its use, there is a co-determination requirement. Conversely, co-determination is not necessarily triggered if employees merely use an AI tool voluntarily and on their own initiative (e.g. privately via their own accounts): this is how the Hamburg Labor Court decided in 2024 in a case concerning employees' voluntary use of ChatGPT, since no monitoring technology provided by the employer was involved.
- Rights to training and participation: The introduction of AI in the workplace can also trigger a right of co-determination with regard to training measures (Section 98 BetrVG) if new skills are required to work with the technology. Employees have an interest in not being “overrun” by technology – they can demand to be adequately trained or retrained if AI changes the way they work. In addition, the works council can consult an expert (Section 80 (3) BetrVG) to assess the AI systems. Overall, it is advisable to reach an agreement with the works council on the use of AI at an early stage. Framework works agreements are often concluded that set out the principles for any use of AI (transparency, purpose, right to a say, data protection) so that lengthy negotiations are not required for each new tool. Such an agreement creates clarity for both sides.
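The bias tests mentioned above can start very simply: compare selection rates across groups and flag large gaps. The sketch below is a hedged illustration with invented numbers; the 0.8 threshold is a common heuristic (the “four-fifths rule”), not a legal standard under the AGG.

```python
def selection_rates(outcomes):
    """outcomes: mapping of group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes, reference):
    """Ratio of each group's selection rate to the reference group's rate.

    Values well below 1.0 signal a possible disparate impact that must be
    investigated and documented before the AI tool is used productively.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference]
    return {g: r / ref for g, r in rates.items()}

# Invented illustration data: an AI shortlisting tool's results by gender
outcomes = {"men": (30, 100), "women": (18, 100)}
ratios = impact_ratios(outcomes, reference="men")
# women's selection rate is only 60% of the men's rate – well below the
# common 0.8 heuristic, so the tool's results warrant closer examination
```

Such a check is only a first screen: a low ratio does not prove illegal discrimination, and a passing ratio does not prove its absence, but the documented test and its result are exactly the kind of evidence an employer needs when the objectivity of the system is challenged.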
In summary, employees enjoy comprehensive protection when it comes to the use of AI: no employee may be at the mercy of an opaque, solely decision-making AI; they have a right to information and fairness; and employee representatives have a significant say in the introduction of AI systems. Companies should not see these rights as an obstacle, but as an important input for the sustainable and accepted use of AI.
Current developments and practical recommendations
New rulings and regulatory guidelines: Case law and administrative practice on AI in labor law are developing dynamically. One example is the decision of the Hamburg Labor Court of January 16, 2024 (ArbG Hamburg, decision of January 16, 2024 – 24 BVGa 1/24), which clarified that merely permitting employees to use an AI tool such as ChatGPT on a voluntary basis does not require co-determination. Only if the employer itself introduces an AI system or prescribes its use must the works council be involved. This decision underscores that companies must carefully define whether a technology counts as the company's “own” tool or merely concerns optional use.
Data protection authorities have also published guidance. The Data Protection Conference (DSK) issued guidance on AI and data protection in 2023/24, which emphasizes, for example, that no automated final decisions may be made in the application process and that controllers must provide detailed information about AI decision-making logic. A possible revision of employee data protection in Germany (draft of an employee data protection law, as of October 2024) also addresses current developments in the field of artificial intelligence. Among other things, the draft contains provisions on the use of AI in the workplace, such as transparency obligations (employees must be informed when AI is used for decision-making) and rules on profiling. Whether and in what form these regulations will be implemented remains to be seen, but the discussion shows a clear tendency: the use of AI in the world of work should be more strictly regulated in order to both enable innovation and protect personal rights.
Recommendations for companies: In view of the requirements described, companies should proactively take measures to use AI in human resources in a legally compliant manner. Important recommendations are:
- Review and document use: Get an overview of where AI systems are already in use in HR processes (or are planned). Perform a compliance check for each system: Does it comply with current data protection rules and the requirements of the AI Regulation? High-risk AI should be closely examined and, if necessary, adapted if the legal criteria are not met. Document the results of this assessment. In particular, your data protection impact assessment should document why the use of AI is necessary and proportionate.
- Ensure transparency: Develop guidelines for the transparent use of AI in your human resources department. Inform applicants on your application form or portal if AI is used to help with the pre-selection process. Internally, employees should know which tools are allowed to be used and how they work. Make sure that no systematic discrimination occurs – test your algorithms for unwanted bias. Establish a four-eyes principle (human + AI) for important AI decisions. This openness also pays off in terms of reputation: companies that use AI in a transparent way strengthen trust and thus their “employer branding”.
- Train employees and the HR team: It is crucial that the HR department develops expertise in dealing with AI. Train your HR managers in how AI systems work, where their technical and legal limits lie, and how to interpret results. Only when the HR team understands AI can it act responsibly, recognize wrong decisions, and provide explanations to the affected employees if necessary. Likewise, employees should be made aware in general terms of what AI can do and where caution is advised – e.g. to avoid data protection violations, they should not enter any confidential HR data unchecked into external AI tools (cloud services).
- Involve the works council at an early stage: If available, involve the works council in AI projects at an early stage. At the latest, the works council should be informed during the planning phase of a new HR tool that contains AI (Section 90 BetrVG). Ideally, you should work together to develop a company agreement that regulates its use (purpose, type of data, evaluation criteria, access rights, duration of storage, etc.). This will help to prevent conflicts. Co-determination can also help to identify blind spots (e.g. acceptance issues among the workforce) at an early stage.
- Legal advice and monitoring: As the legal situation continues to develop, it is advisable to seek legal advice, particularly with regard to the AI Regulation. An individual risk analysis by experts can help to avoid expensive mistakes (and fines). Industry information from associations (e.g. Bitkom guides) can provide orientation.
- Establish AI guidelines and governance: Develop internal AI guidelines or a code of ethics for AI use in human resources. Determine which applications are allowed and which are off limits (e.g., a ban on facial recognition for employee monitoring unless required by law). These guidelines should also cover aspects such as data security, access restrictions and responsibilities. As the example of ChatGPT shows, it makes sense to establish clear “rules of the game” to prevent data protection and copyright infringements resulting from careless use of AI by employees. A governance team (including a data protection officer, IT security, HR and legal) can regularly check whether AI applications comply with the guidelines.
Conclusion
The legally compliant use of AI in human resources requires a balanced interplay of data protection, labor law and AI compliance. Companies should closely monitor legal developments – from the GDPR and national regulations to the EU AI Regulation. The legal requirements are manageable and should not prevent the use of AI. With early adaptation of processes, transparent communication and clear responsibilities, the opportunities offered by AI can be exploited without unsettling employees or incurring legal risks.