
UC Berkeley Study Finds Algorithmic Mortgage Lenders Discriminate Against Minority Borrowers

Severity
High

UC Berkeley researchers found algorithmic mortgage lenders charged minority borrowers 5.3 basis points more in interest rates than similarly qualified white borrowers, affecting 1.7 million borrowers annually and resulting in $765 million in excess payments.

Category
Bias
Industry
Finance
Status
Ongoing
Date Occurred
Jan 1, 2009
Date Reported
Nov 1, 2018
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
API integration
Harm Type
Financial
Estimated Cost
$765,000,000
People Affected
1,700,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Settled
Regulatory Body
Consumer Financial Protection Bureau
Fine Amount
$43,600,000
mortgage_lending, algorithmic_bias, fair_lending, CFPB, discrimination, fintech, proxy_variables, regulatory_enforcement

Full Description

In November 2018, UC Berkeley researchers Robert Bartlett, Adair Morse, Richard Stanton, and Nancy Wallace published a National Bureau of Economic Research working paper, "Consumer-Lending Discrimination in the FinTech Era," demonstrating systematic discrimination by algorithmic mortgage lenders against minority borrowers. The study analyzed 2.4 million mortgage applications from 2009 to 2015 and found that both face-to-face and algorithmic lenders charged Latino and African American borrowers higher interest rates than similarly qualified white applicants: algorithmic lenders charged minority borrowers an additional 5.3 basis points relative to white borrowers with equivalent credit profiles and risk characteristics, while face-to-face lenders showed a similar pattern at 5.0 basis points. The researchers estimated that this discriminatory pricing affected approximately 1.7 million borrowers annually, resulting in $765 million in excess interest payments across the mortgage market.

The discrimination occurred even though the algorithmic lenders did not explicitly use race or ethnicity as input variables in their decision-making models. Instead, the algorithms incorporated other data points that served as proxies for protected characteristics, such as neighborhood demographics, shopping patterns, and credit histories correlated with racial and ethnic background. This proxy discrimination perpetuated historical lending biases inside a seemingly neutral technological framework.

The study's findings reinforced regulatory scrutiny of discriminatory lending practices. The Consumer Financial Protection Bureau has secured settlements totaling over $43 million from lenders for discriminatory mortgage lending, including Hudson City Bancorp ($33 million, 2015) and BancorpSouth ($10.6 million, 2016), and continued enforcement against discriminatory pricing after the study's publication.
The agency also issued guidance requiring lenders to monitor their algorithmic systems for discriminatory outcomes and to implement corrective measures when bias is detected.

The study's methodology compared loan pricing for borrowers with similar credit scores, debt-to-income ratios, loan amounts, and property characteristics while controlling for geographic and temporal factors. The researchers found that algorithmic lenders' discrimination was particularly pronounced in areas with higher minority populations, suggesting that the algorithms learned and amplified existing market biases rather than eliminating human prejudice, as proponents had claimed.

The findings sparked broader regulatory scrutiny of algorithmic decision-making in financial services, with multiple federal agencies developing fairness-testing requirements and bias-monitoring protocols for automated lending systems. The incident demonstrated how artificial intelligence systems can perpetuate and scale discriminatory practices through seemingly objective mathematical models, prompting calls for algorithmic auditing and fairness-by-design requirements in financial technology.
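The pricing comparison described above can be sketched as a regression of the note rate on risk controls plus a group indicator, whose coefficient is the residual disparity in basis points. The sketch below uses entirely synthetic data with an injected 5.3-basis-point gap; the variable names, distributions, and magnitudes are illustrative assumptions, not data or code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical loan-level data (illustrative only).
credit_score = rng.normal(700, 40, n)
dti = rng.normal(0.35, 0.08, n)          # debt-to-income ratio
loan_amount = rng.normal(250_000, 60_000, n)
minority = rng.binomial(1, 0.2, n).astype(float)

# Rate in percent: risk-based pricing plus an injected
# 5.3-basis-point (0.053 percentage point) group disparity.
rate = (
    6.0
    - 0.004 * (credit_score - 700)
    + 1.5 * dti
    + 0.053 * minority
    + rng.normal(0, 0.10, n)
)

# Pricing regression: rate ~ risk controls + group indicator.
X = np.column_stack([
    np.ones(n),
    credit_score,
    dti,
    loan_amount / 100_000,
    minority,
])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
disparity_bp = beta[-1] * 100  # percentage points -> basis points
print(f"estimated rate gap after controls: {disparity_bp:.1f} bp")
```

Because the controls absorb the legitimate risk pricing, the coefficient on the group indicator recovers the residual gap, which is the quantity the study reports.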

Root Cause

Algorithmic lending models incorporated variables that served as proxies for race and ethnicity, perpetuating historical discrimination patterns despite not explicitly using protected characteristics as inputs.
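The proxy mechanism can be shown in a few lines: a pricing rule that never sees group membership still produces a group-level price gap whenever one of its inputs is correlated with that membership. The neighborhood-level score below is a hypothetical stand-in for such a proxy; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: the model never receives group membership,
# but one "neutral" input is correlated with it.
group = rng.binomial(1, 0.3, n).astype(float)
neighborhood_score = rng.normal(0, 1, n) + 1.2 * group  # proxy variable

# Pricing rule that uses only the seemingly neutral input.
price = 5.0 + 0.10 * neighborhood_score

# The resulting group-level gap, despite race-blind inputs.
gap = price[group == 1].mean() - price[group == 0].mean()
print(f"mean pricing gap without group input: {gap:.3f}")
```

Dropping the protected attribute from the inputs does not remove the disparity; it only hides the channel through which it enters.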

Mitigation Analysis

Algorithmic fairness auditing with demographic impact analysis could have identified discriminatory outcomes. Implementing fairness constraints in model training, removing proxy variables correlated with protected characteristics, and establishing ongoing bias monitoring systems would reduce discriminatory pricing. Human review of algorithmic decisions for applications involving minority borrowers could provide additional oversight.
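One minimal form of the demographic impact analysis mentioned above is a within-stratum disparity check: bucket applicants by a risk control (here, credit score) and flag any bucket where the group rate gap exceeds a tolerance. The data, bucket widths, and 2-basis-point threshold below are all illustrative assumptions, not a regulatory standard:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical audit data: observed rates, one risk control, group labels.
score = rng.integers(600, 800, n)
group = rng.binomial(1, 0.25, n)
rate = 6.5 - 0.005 * (score - 700) + 0.05 * group + rng.normal(0, 0.2, n)

# Compare mean rates inside each credit-score bucket so that risk
# differences are (crudely) controlled before measuring the gap.
THRESHOLD_BP = 2.0  # illustrative tolerance, in basis points
flags = []
for lo in range(600, 800, 25):
    mask = (score >= lo) & (score < lo + 25)
    g1 = rate[mask & (group == 1)]
    g0 = rate[mask & (group == 0)]
    gap_bp = (g1.mean() - g0.mean()) * 100
    if abs(gap_bp) > THRESHOLD_BP:
        flags.append((lo, round(gap_bp, 1)))

print(f"buckets flagged for review: {len(flags)} of 8")
```

A production audit would use matched or regression-adjusted comparisons across many controls and run on a schedule, but even this stratified check would surface a persistent gap of the size the study measured.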

Litigation Outcome

Multiple CFPB settlements with mortgage lenders, including Hudson City Bancorp ($33 million, 2015) and BancorpSouth ($10.6 million, 2016), for discriminatory lending practices

Lessons Learned

Algorithmic systems can perpetuate and amplify historical discrimination even when protected characteristics are not explicitly used as inputs, requiring proactive bias testing and fairness constraints. The scale and automation of AI systems means discriminatory outcomes can affect millions of people systematically, making prevention more critical than remediation.

Sources

Consumer-Lending Discrimination in the FinTech Era
National Bureau of Economic Research · Nov 1, 2018 · academic paper
CFPB Reaches $33 Million Settlement with Hudson City Bancorp for Discriminatory Mortgage Lending
Consumer Financial Protection Bureau · Sep 24, 2015 · regulatory action