
Spain's VioGén Algorithm Criticized After Fatal Domestic Violence Cases Misclassified as Low Risk

Critical

Spain's VioGén algorithm for assessing domestic violence risk was criticized after multiple fatal incidents where victims were classified as low risk and received inadequate protection.

Category
Bias
Industry
Government
Status
Ongoing
Date Occurred
Jan 1, 2019
Date Reported
Dec 15, 2019
Jurisdiction
Spain
AI Provider
Other/Unknown
Model
VioGén Risk Assessment Algorithm
Application Type
Embedded
Harm Type
physical
People Affected
1,000
Human Review in Place
Yes
Litigation Filed
No
domestic_violence, risk_assessment, gender_bias, government_algorithm, police, spain, fatal_incidents, bias_audit

Full Description

Spain's VioGén system (Sistema de Seguimiento Integral en los casos de Violencia de Género, the comprehensive monitoring system for gender violence cases) has been used since 2007 by the Spanish National Police and Civil Guard to evaluate the risk level of domestic violence cases and determine the level of protection victims receive. The system assigns each case one of five risk levels (unappreciable, low, medium, high, and extreme) based on factors including the perpetrator's criminal history, the victim's circumstances, and the characteristics of the incident. Police officers input data through standardized questionnaires, and the algorithm generates risk scores that determine protection measures ranging from occasional police checks to 24-hour protection.

In 2019, serious concerns emerged about the algorithm's effectiveness and bias after feminist organizations and researchers documented multiple cases in which women were killed by their abusers despite being classified as low risk by VioGén. Analysis revealed systematic biases in the algorithm's methodology that disadvantaged certain groups of women: the system assigned lower risk scores to cases involving unemployed women, women with children, and immigrant women, reflecting historical police data that underrepresented violence against these vulnerable populations.

Research conducted by Data for Feminism and other advocacy groups found that the algorithm's risk factors were based on traditional police statistics that historically underestimated threats to marginalized women. The system's reliance on formal complaints and documented incidents meant that women who faced barriers to reporting, such as immigrants without legal status, economically dependent women, or those in isolated situations, were systematically underprotected. The algorithm also failed to adequately weight psychological abuse and controlling behaviors, which are strong predictors of lethal violence.

Criticism intensified when data showed that approximately 30% of domestic violence murders in Spain occurred in cases previously assessed as low or unappreciable risk by VioGén. Feminist organizations such as the Women's Federation of Progressive Organizations of Spain called for comprehensive reforms, arguing that the algorithm perpetuated the gender and racial biases inherent in police data, and advocated for incorporating the insights of gender violence specialists and updating the risk factors to better reflect the experiences of diverse victims. The Spanish government has since initiated reviews of the system and begun exploring algorithmic bias auditing processes, though comprehensive reforms remain ongoing.
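
To make the scoring mechanism concrete, below is a minimal sketch of how a questionnaire-driven assessment might map onto the five risk levels and protection measures described above. The indicators, weights, and thresholds are entirely hypothetical illustrations, not VioGén's actual methodology.

```python
# Hypothetical illustration of a questionnaire-to-risk-level pipeline.
# None of these indicators, weights, or thresholds come from VioGén;
# they only show the general shape of such a system.

RISK_LEVELS = ["unappreciable", "low", "medium", "high", "extreme"]

# Illustrative indicators a police questionnaire might record, with weights.
WEIGHTS = {
    "prior_convictions": 3.0,
    "weapon_access": 4.0,
    "recent_escalation": 3.5,
    "threats_to_kill": 5.0,
}

THRESHOLDS = [2.0, 6.0, 10.0, 14.0]  # score cut-offs between adjacent levels

PROTECTION = {
    "unappreciable": "no active measures",
    "low": "occasional police checks",
    "medium": "frequent checks and victim contact",
    "high": "intensive monitoring of the aggressor",
    "extreme": "24-hour protection",
}

def assess(answers: dict[str, bool]) -> tuple[str, str]:
    """Sum the weights of the indicators flagged in the questionnaire
    and map the total onto a risk level and protection measure."""
    score = sum(w for key, w in WEIGHTS.items() if answers.get(key))
    level = RISK_LEVELS[sum(score >= t for t in THRESHOLDS)]
    return level, PROTECTION[level]

print(assess({"prior_convictions": True, "recent_escalation": True}))
# -> ('medium', 'frequent checks and victim contact')
```

A system of this shape is only as good as its weights and thresholds, which is exactly where the biases described above enter: if the weights are fitted to historical data that under-records violence against some groups, those groups land below the cut-offs.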

Root Cause

The VioGén algorithm encoded gender bias inherited from historical police data: its weighting of risk factors systematically produced lower scores for certain groups of women, particularly those who were unemployed, had children, or were immigrants. The risk assessment methodology also failed to account for the complexity and escalating nature of domestic violence patterns, underweighting psychological abuse and controlling behaviors that are strong predictors of lethal violence.
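
As a hedged illustration of this root cause, the synthetic simulation below shows how labels drawn from under-reported historical data can teach a naive model that one group is lower risk even when the true underlying risk is identical. The groups, rates, and counts are invented for the sketch.

```python
# Minimal sketch of the failure mode: if historical labels under-record
# violence against one group (here "group_b"), a model fit on that data
# learns to score the group as lower risk. Entirely synthetic data.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    group = random.choice(["group_a", "group_b"])
    violent = random.random() < 0.30          # true underlying risk is equal
    # group_b's incidents go unreported 60% of the time (reporting barriers)
    reported = violent and not (group == "group_b" and random.random() < 0.6)
    rows.append((group, reported))            # labels reflect reports, not reality

# "Learned risk" per group = observed label rate, as a naive model would see it.
for g in ("group_a", "group_b"):
    labels = [r for grp, r in rows if grp == g]
    print(g, round(sum(labels) / len(labels), 3))
# group_a ~0.30, group_b ~0.12: equal real risk, unequal learned risk.
```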

Mitigation Analysis

Stronger human oversight, with mandatory review by specialized domestic violence experts of cases scored as low or unappreciable risk, could have caught misclassifications before they proved fatal. Regular algorithmic audits for bias against protected demographic groups, focusing in particular on socioeconomic and immigration-status variables, would have surfaced the discriminatory patterns earlier. Feedback loops incorporating actual case outcomes, rather than historical police data alone, could have been used to continuously retrain and recalibrate the model.
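
One sketch of the kind of bias audit proposed above: among cases that later had a violent outcome, compare per group how often the system had scored the case as low or unappreciable risk. The field names, groups, and records here are hypothetical, chosen only to show the measurement.

```python
# Sketch of a disparity audit: per-group false-negative rate, i.e. the share
# of later-violent cases the system had classified as low/unappreciable risk.
from collections import defaultdict

def false_negative_rates(cases: list[dict]) -> dict[str, float]:
    """Among cases with a subsequent violent outcome, return the share per
    group that had been assessed as low or unappreciable risk."""
    missed = defaultdict(int)
    total = defaultdict(int)
    for c in cases:
        if c["violent_outcome"]:
            total[c["group"]] += 1
            if c["assessed_level"] in ("unappreciable", "low"):
                missed[c["group"]] += 1
    return {g: missed[g] / total[g] for g in total}

cases = [  # hypothetical audit records
    {"group": "employed", "assessed_level": "high", "violent_outcome": True},
    {"group": "employed", "assessed_level": "low", "violent_outcome": True},
    {"group": "immigrant", "assessed_level": "low", "violent_outcome": True},
    {"group": "immigrant", "assessed_level": "unappreciable", "violent_outcome": True},
]
print(false_negative_rates(cases))
# -> {'employed': 0.5, 'immigrant': 1.0}: a gap that should trigger review
```

Run on real outcome data, a recurring report of this one number per group would have made the pattern behind the fatal low-risk classifications visible long before external researchers documented it.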

Lessons Learned

Algorithmic risk assessment systems in sensitive domains like domestic violence require continuous bias auditing and input from domain experts who understand the social dynamics affecting vulnerable populations. Historical police data may not adequately represent risks to marginalized groups who face barriers to reporting.