Jacobsen + Confurius

AI in Recruitment Management: Discrimination Risks and Legally Compliant Selection Processes

The use of artificial intelligence in recruiting has become an established component of talent acquisition in many companies. Modern applicant tracking systems integrate AI modules for automated pre-selection, qualification assessment, and the prioritization and recommendation of candidates. The objective is to process large volumes of applications more efficiently and to provide structured decision support.

From a legal perspective, however, such use operates in a particularly sensitive area. AI-based recruiting systems directly affect access to employment and implicate fundamental rights. Deficient model logic, inadequate control mechanisms, or non-transparent evaluation criteria may lead to indirect discrimination and trigger significant liability risks. With the entry into force of the EU Artificial Intelligence Act, this regulatory framework has become even more stringent.

  1. System Architecture and Functioning of Modern Recruiting AI

Typical applicant tracking systems operate using a multi-layered architecture. Applicant data are initially collected via career portals, email imports, or CV parsers and transferred into digital applicant files. This is followed by system-based structuring through ranking, matching, and scoring mechanisms.

AI modules regularly supplement these processes through:

  • Automated analysis of application documents;
  • Generation of suitability and prioritization proposals;
  • Recommendation categories for HR personnel;
  • Support for applicant communication;
  • Optimization of job postings and vacancy strategies.

In practice, these outputs are not merely provided for informational purposes but are regularly integrated into subsequent assessments. They structure the sequence, depth, and intensity of human evaluation and thereby shape the decision-making framework of recruitment.
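
To make this pre-structuring effect concrete, the following minimal Python sketch shows how a suitability score can determine the order in which recruiters see applications. All names (Candidate, model_score) and figures are illustrative assumptions, not taken from any specific applicant tracking system.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        model_score: float  # hypothetical suitability score from the AI module

    def review_queue(candidates: list[Candidate]) -> list[Candidate]:
        # Sort by descending score: recruiters encounter high-scoring
        # applications first, which shapes the sequence and depth of review.
        return sorted(candidates, key=lambda c: c.model_score, reverse=True)

    applicants = [Candidate("A", 0.62), Candidate("B", 0.91), Candidate("C", 0.47)]
    for c in review_queue(applicants):
        print(c.name, c.model_score)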

From a legal standpoint, this is decisive: even where the formal hiring decision is made by natural persons, the AI system effectively influences access to employment.

  2. Emergence of Discrimination Risks

Discrimination in AI-based recruiting rarely arises intentionally but is usually structural in nature. Key risk factors include:

  • Biased training data reproducing historical selection patterns;
  • Indirect proxy characteristics for protected attributes;
  • Algorithmic weightings lacking transparent justification;
  • Optimization objectives with adverse effects on specific groups;
  • Rigid filtering and threshold mechanisms.

Even such indirect effects may constitute discrimination within the meaning of the German General Equal Treatment Act (AGG). In dispute proceedings, the presentation of prima facie evidence is generally sufficient to shift the burden of proof to the employer (Section 22 AGG). In the case of complex AI models, legal justification then becomes particularly demanding.
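
The proxy problem can be illustrated with a simple audit sketch: a facially neutral feature whose distribution differs sharply between groups can stand in for a protected attribute even though the attribute itself is never processed. The feature, data, and threshold below are purely illustrative assumptions.

    from statistics import mean

    # Each record: (career_gap_years, group); the group label exists only
    # for audit purposes and is never fed into the selection model itself.
    records = [
        (0.0, "m"), (0.2, "m"), (0.1, "m"),
        (1.5, "f"), (2.0, "f"), (1.2, "f"),
    ]

    by_group = {}
    for value, group in records:
        by_group.setdefault(group, []).append(value)

    means = {g: mean(vs) for g, vs in by_group.items()}
    print(means)  # large gap between group means
    if max(means.values()) - min(means.values()) > 0.5:  # illustrative threshold
        print("feature may act as a proxy; review its weight in the model")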

  3. Profiling and Article 22 GDPR

The use of AI in recruitment management is subject, inter alia, to the requirements of the General Data Protection Regulation (GDPR).

  a) Profiling

Pursuant to Article 4(4) GDPR, profiling occurs where personal data are processed in an automated manner to evaluate or predict personal aspects. This is regularly the case with scoring, ranking, and suitability systems.

AI outputs typically relate to professional aptitude, reliability, performance potential, or prospects of success. They go beyond the mere reproduction of existing information and constitute independent evaluative inferences. Consequently, the threshold for profiling is regularly exceeded.

  b) Distinction from Automated Decision-Making

For the purposes of Article 22 GDPR, it is not decisive whether a human formally makes the decision. The decisive factor is whether the automated assessment is effectively adopted. Where AI recommendations are regularly implemented without independent review, exclusively automated decision-making within the meaning of Article 22 GDPR may exist.

Note: The Court of Justice of the European Union clarified in its SCHUFA judgment (Case C-634/21, judgment of 7 December 2023) that “solely automated decision-making” may also exist where a human is formally involved but does not independently review the automated assessment and instead adopts it in practice. Mere formal human involvement is therefore insufficient to avoid the application of Article 22 GDPR.
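
One way to substantiate the effectiveness of human involvement is to measure how often reviewers actually deviate from the AI recommendation; an override rate near zero suggests that the automated assessment is adopted in practice. The following sketch illustrates such a metric under purely illustrative assumptions (field names and the review trigger are hypothetical).

    # Each entry pairs the system's recommendation with the eventual human
    # decision; the field names are illustrative.
    decisions = [
        {"ai": "reject", "human": "reject"},
        {"ai": "reject", "human": "invite"},
        {"ai": "invite", "human": "invite"},
        {"ai": "reject", "human": "reject"},
    ]

    overrides = sum(1 for d in decisions if d["human"] != d["ai"])
    override_rate = overrides / len(decisions)
    print(f"override rate: {override_rate:.0%}")  # 25%
    if override_rate < 0.05:  # illustrative internal review trigger
        print("check whether human involvement is effective (Art. 22 GDPR)")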

In such constellations, additional safeguards are required, in particular:

  • Effective human review;
  • Opportunity to submit observations;
  • Contestability of the decision.

  c) Transparency and Purpose Limitation

Applicants must be informed in a clear and comprehensible manner that AI is being used, which data are processed, and what significance the results have. General or vague formulations are insufficient. In addition, data use must be strictly limited to the specific selection purpose.

  4. Classification under the EU Artificial Intelligence Act

The EU Artificial Intelligence Act has been in force since 1 August 2024; the requirements for high-risk AI systems under Articles 6 et seq. will apply from 2 August 2026. Companies should use the remaining time for preparation and compliance implementation.

  a) High-Risk Classification

AI systems used to analyse applicant data and derive prioritization recommendations regularly qualify as high-risk AI under Article 6(2) in conjunction with Annex III No. 4(a) of the AI Act. This category expressly covers AI systems used for the recruitment or selection of natural persons, in particular for targeted job advertising, the analysis and filtering of applications, and the evaluation of candidates. The decisive factor is that such systems pre-structure and steer the selection process. For classification as high-risk AI, it is irrelevant that the final decision is formally taken by humans. What matters is the actual influence on access to employment.

  b) No Exemption under Article 6(3) AI Act

The exemptions provided for in Article 6(3) AI Act generally do not apply:

  • The systems are not limited to narrow procedural tasks;
  • They do not merely improve the result of previously completed human activities;
  • They do not only detect patterns or deviations in prior decision-making;
  • They do not perform purely preparatory tasks.

Rather, these systems generate independent, context-specific recommendations.

  c) Allocation of Roles

As a rule, the software manufacturer qualifies as the provider. The deploying company qualifies as the deployer within the meaning of Article 3(4) AI Act, as it determines the context of use, controls the organisation, and decides on the use of results. Quasi-provider status comes into consideration only in cases of substantial changes to the system's purpose or design (Article 25 AI Act).

  d) Deployer Obligations

As a deployer, the company is subject in particular to:

  • Documentation obligations;
  • Risk management duties;
  • Ensuring human oversight;
  • Real-time operational monitoring;
  • Cooperation with supervisory authorities.

These obligations apply irrespective of contractual commitments by the provider.
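
Documentation and monitoring duties are typically supported by structured logging of every recommendation together with the corresponding human decision. The following sketch shows one possible log schema; the field names are assumptions, not requirements taken from the text of the AI Act.

    import json
    from datetime import datetime, timezone

    def log_recommendation(candidate_id: str, score: float,
                           recommendation: str, reviewer: str,
                           human_decision: str) -> str:
        # One structured entry per recommendation; in production this would
        # be written to append-only storage rather than returned as a string.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate_id,
            "model_score": score,
            "ai_recommendation": recommendation,
            "reviewer": reviewer,
            "human_decision": human_decision,
        }
        return json.dumps(entry)

    print(log_recommendation("c-1042", 0.78, "invite", "hr-07", "invite"))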

  5. Employment and Collective Labour Law Dimension

In addition to data protection law and the AI Act, employment law requirements must be observed.

With regard to the use of AI in recruitment procedures, Section 95 of the German Works Constitution Act (BetrVG) is primarily applicable insofar as standardized evaluation logics qualify as selection guidelines. Section 87(1) No. 6 BetrVG applies additionally where the system is also capable of monitoring the conduct or performance of HR staff involved in recruitment. This does not apply merely due to the processing of applicant data, as Section 87(1) No. 6 BetrVG is intended to protect employees rather than applicants.

Furthermore, documentation is gaining increasing importance in recruitment processes. The more automated the processes are, the higher the expectation for traceable decision-making bases.

  6. Liability and Reputational Risks

In practice, primary liability rests with the employer. Typical risks include:

  • Compensation claims under the AGG;
  • Claims for damages under Article 82 GDPR (asserted civilly by data subjects) and administrative fines under Article 83 GDPR (imposed by supervisory authorities);
  • Sanctions under the AI Act;
  • Injunctive claims by works councils;
  • Loss of trust among applicants and the public.

Recourse claims against providers are possible but only partially mitigate these risks.

  7. Concluding Assessment

AI in recruitment management is not merely an efficiency tool but a highly regulated intervention in access to professional opportunities. Systems that evaluate, prioritize, and pre-structure applications regularly qualify as high-risk AI and are subject to strict data protection and employment law requirements. Legally compliant deployment is ensured only where technical design, organisational processes, and legal review are consistently integrated. Companies that introduce, document, and control AI strategically can modernise their recruitment processes without incurring significant liability and reputational risks.

Practical Guidance: Legally Compliant Use of Recruiting AI

From a legal advisory perspective, companies should organise the use of AI in recruitment management in a structured, transparent, and auditable manner:

  1. Documentation of Purpose and Context: Document in writing the scope of use, decision logic, and role of AI in the process.
  2. Technical Safeguarding of Human Oversight: Configure systems to exclude automatic rejections and require active review.
  3. Legal Review of Profiling and DPIA: Systematically assess whether profiling occurs and conduct a data protection impact assessment at an early stage.
  4. Review of Provider Compliance: Require transparent information on training data, model changes, bias testing, and audit mechanisms.
  5. Development of a Transparency Framework: Integrate clear notices into career portals, confirmation emails, and user interfaces.
  6. Establishment of Discrimination Testing: Regularly conduct statistical reviews to identify systematic disadvantages for specific groups (see the sketch after this list).
  7. Involvement of the Works Council (where applicable): Clarify selection guidelines and system parameters under collective law prior to implementation, primarily based on Section 95 BetrVG and, where applicable, Section 87(1) No. 6 BetrVG.
  8. Implementation of Continuous Monitoring: Conduct regular reviews, logging, and adjustments.
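
For the discrimination testing in item 6, a common screening heuristic is the "four-fifths rule": the selection rate of each group should amount to at least 80 percent of the most favoured group's rate. The sketch below applies it to illustrative figures; the heuristic originates in US practice and serves as an audit aid, not as a legal test under the AGG.

    # Selection outcomes per group; figures are purely illustrative.
    selected = {"group_a": 45, "group_b": 20}
    applied = {"group_a": 100, "group_b": 80}

    rates = {g: selected[g] / applied[g] for g in applied}
    benchmark = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / benchmark
        status = "FLAG for review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")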

Contact:
Jens Borchardt
