Retorio AI Hiring Tool Found to Evaluate Candidates Based on Background and Clothing Rather Than Qualifications

Severity
High

Bavarian research revealed that Retorio's AI video interview tool evaluated job candidates based on irrelevant factors like clothing and background objects rather than qualifications. The findings highlighted systematic bias in AI hiring tools that could violate anti-discrimination laws.

Category
Bias
Industry
HR / Recruiting
Status
Resolved
Date Occurred
Jan 1, 2020
Date Reported
Dec 1, 2020
Jurisdiction
EU
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Legal
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Bavarian State Office for Data Protection Supervision
Tags
hiring, bias, discrimination, video-analysis, employment, regulation, retorio, bavaria

Full Description

In 2020, researchers from the Bavarian State Office for Data Protection Supervision conducted a comprehensive study of AI-powered video interview analysis tools, including the product of Retorio, a Munich-based company that claimed its technology could assess candidate personality traits and job suitability through video analysis. The investigation revealed deeply problematic biases in how these systems evaluated job applicants.

The study found that Retorio's AI system made hiring recommendations based on superficial and irrelevant visual characteristics rather than actual job qualifications. Candidates were rated differently depending on whether they wore glasses, headscarves, or other clothing items. The system also factored background elements visible in video interviews, such as bookshelves, artwork, or home furnishings, into its assessment of candidate suitability. None of these factors had any legitimate relationship to job performance or qualifications.

The research demonstrated that the AI system's decision-making process violated fundamental principles of fair hiring. The technology appeared to have learned discriminatory patterns from its training data, potentially reinforcing existing biases against certain demographic groups. For instance, rating candidates differently based on religious head coverings could constitute religious discrimination, while evaluating home backgrounds could disadvantage candidates of lower socioeconomic status.

The findings were part of a broader examination of AI hiring tools, which had gained popularity during the COVID-19 pandemic as companies shifted to remote recruiting. Similar issues were identified with other AI hiring platforms, including HireVue, which also faced scrutiny for potentially discriminatory algorithms. The Bavarian regulators concluded that such AI systems posed significant risks to fair employment practices and could violate European data protection and anti-discrimination laws.

Following the study's publication, regulatory pressure on AI hiring tool providers to demonstrate fairness and transparency increased, and the European Union subsequently began developing more comprehensive regulations for AI systems used in high-risk applications such as employment decisions. Retorio and similar companies faced demands to audit their systems and to disclose how their algorithms arrive at hiring recommendations.

The incident highlighted the broader challenge of deploying AI in hiring decisions without adequate oversight and bias testing. It showed how AI systems can perpetuate or amplify discriminatory practices while appearing to provide objective, data-driven recommendations, and it became a significant example in discussions about AI regulation and algorithmic accountability in employment contexts.
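
The finding lends itself to a simple audit methodology: score the same candidate, giving identical answers, under varied job-irrelevant visual conditions and measure how much the output moves. The sketch below illustrates the idea in Python; `score_video` is a hypothetical stand-in for a vendor scoring API, not Retorio's actual interface.

```python
# Hypothetical perturbation audit: the same candidate, identical answers,
# scored under visual conditions that should be irrelevant to suitability.
# `score_video` is an assumed stand-in for a vendor API, not a real endpoint.
from statistics import pstdev

def score_video(path: str) -> float:
    """Stand-in for the vendor's scoring endpoint (0-100 scale)."""
    return 50.0  # dummy value so the sketch runs end to end

conditions = {
    "baseline": "candidate_plain_wall.mp4",
    "glasses": "candidate_glasses.mp4",
    "headscarf": "candidate_headscarf.mp4",
    "bookshelf_bg": "candidate_bookshelf.mp4",
    "artwork_bg": "candidate_artwork.mp4",
}

scores = {name: score_video(path) for name, path in conditions.items()}
baseline = scores["baseline"]

for name, score in scores.items():
    print(f"{name:>12}: {score:5.1f} (delta vs. baseline: {score - baseline:+.1f})")

# A well-behaved system shows near-zero spread across conditions; large
# deltas mean the model reacts to job-irrelevant visual features.
print(f"spread across conditions: {pstdev(scores.values()):.2f}")
```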

Root Cause

The video-analysis algorithms were trained on biased datasets and weighted superficial visual characteristics, such as glasses, headscarves, and background objects, over job-relevant criteria, reflecting fundamental flaws in feature selection and training methodology.
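
How such a flaw emerges can be illustrated with a toy model (an assumed setup for illustration, not Retorio's actual pipeline): when historical hiring labels correlate with a job-irrelevant attribute, a classifier trained on them assigns that attribute real predictive weight.

```python
# Toy illustration (assumed setup, not Retorio's pipeline): if historical
# labels correlate with an irrelevant attribute, a model learns to use it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

qualification = rng.normal(size=n)            # job-relevant signal
wears_headscarf = rng.integers(0, 2, size=n)  # job-irrelevant attribute

# Biased historical labels: past raters marked down headscarf wearers.
logits = qualification - 0.8 * wears_headscarf
hired = (logits + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, wears_headscarf])
model = LogisticRegression().fit(X, hired)

print(f"coef on qualification: {model.coef_[0][0]:+.2f}")
print(f"coef on headscarf:     {model.coef_[0][1]:+.2f}")  # strongly negative
```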

Mitigation Analysis

Comprehensive algorithmic auditing and bias testing could have identified these discriminatory patterns before deployment. Human oversight of AI hiring decisions, diverse training datasets, and technical controls that mask job-irrelevant visual features would have reduced the risk of discrimination. Regular fairness assessments and transparency requirements for AI hiring tools are essential safeguards.
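
One concrete check such an audit could apply is a selection-rate comparison, for example the four-fifths rule from US adverse-impact analysis, which also serves as a useful screening heuristic outside that jurisdiction. A minimal sketch, assuming recommendation outcomes grouped by an attribute the system should ignore:

```python
# Minimal adverse-impact check (four-fifths rule) on recommendation rates.
# Assumed data layout: one (group, recommended) pair per candidate, where
# "group" is an attribute the system should ignore.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, recommended) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        selected[group] += int(recommended)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

records = [
    ("no_headscarf", True), ("no_headscarf", True), ("no_headscarf", False),
    ("headscarf", True), ("headscarf", False), ("headscarf", False),
]

rates = selection_rates(records)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "FAIL" if ratio < 0.8 else "ok"
    print(f"{group:>13}: rate={rates[group]:.2f} impact_ratio={ratio:.2f} [{flag}]")
```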

Lessons Learned

The incident demonstrates that AI hiring tools require rigorous bias testing and ongoing monitoring to prevent discrimination based on irrelevant characteristics. It underscores the importance of regulatory oversight and transparency requirements for AI systems used in high-stakes decisions like employment.

Sources

Bavarian Data Protection Authority Report on AI in Hiring
Bavarian State Office for Data Protection Supervision · Dec 1, 2020 · regulatory action