Yelp's AI Review Filter Allegedly Suppressed Legitimate Small Business Reviews
Severity
High
Yelp's AI review filtering system allegedly suppressed legitimate positive reviews for small businesses while preserving negative ones, creating financial pressure to buy ads. Class action lawsuits were filed, but courts ruled Yelp had no obligation to display reviews fairly.
Category
Bias
Industry
Technology
Status
Resolved
Date Occurred
Jan 1, 2009
Date Reported
Sep 1, 2013
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
embedded
Harm Type
financial
Estimated Cost
$50,000,000
People Affected
2,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Judgment for defendant
Regulatory Body
Federal Trade Commission
Tags
review_filtering, algorithmic_bias, small_business, platform_liability, content_moderation, extortion_claims, recommendation_algorithm
Full Description
Beginning around 2009, thousands of small business owners began reporting that Yelp's automated review filtering system was systematically suppressing positive customer reviews while allowing negative reviews to remain visible. Business owners alleged this created a pattern where their overall ratings appeared artificially low, driving away potential customers and harming revenue. The controversy centered on Yelp's proprietary recommendation software that determined which reviews were displayed prominently versus filtered into a separate 'not recommended' section.
Multiple business owners reported that Yelp sales representatives would contact them offering advertising packages, with the implication that purchasing ads might improve their review visibility. Businesses that declined to advertise allegedly saw continued suppression of positive reviews. Pest control companies, restaurants, medical practices, and other service businesses documented cases where 4-5 star reviews from verified customers disappeared while 1-2 star reviews remained prominent. Some businesses reported losing 20-50% of their positive reviews to the filtering system.
The issue escalated into multiple class action lawsuits filed between 2010 and 2013, with plaintiffs arguing that Yelp was engaging in extortion by manipulating reviews to coerce advertising purchases. The lead case, Levitt v. Yelp, consolidated claims from over 2,000 businesses seeking damages for lost revenue. Businesses presented evidence of a correlation between declining to purchase ads and increased filtering of positive reviews. The FTC received hundreds of complaints from small business owners alleging the practice violated consumer protection laws.
Yelp defended its filtering algorithm as necessary to combat fake reviews, arguing that newer businesses and those without established review patterns naturally had more reviews filtered. The company maintained that advertising purchases had no influence on the recommendation software and that correlation was not causation. Yelp's engineering team testified that the algorithm considered factors like reviewer history, review patterns, and account authenticity to determine credibility.
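Yelp never published its recommendation software, so its exact logic is unknown. As a purely illustrative sketch, a credibility filter built from the broad factors Yelp's engineers described (reviewer history, review patterns, account authenticity) might look like the following; every threshold, weight, and field name here is invented for illustration, not drawn from Yelp's actual system.

```python
from dataclasses import dataclass

@dataclass
class Review:
    rating: int                 # 1-5 stars
    reviewer_review_count: int  # reviews the account has posted overall
    reviewer_has_profile: bool  # filled-in profile (authenticity signal)
    account_age_days: int       # age of the reviewer's account

def credibility_score(r: Review) -> float:
    """Toy credibility score: each trust signal adds weight.

    All weights and thresholds are hypothetical."""
    score = 0.0
    if r.reviewer_review_count >= 5:
        score += 0.4  # established posting history
    if r.reviewer_has_profile:
        score += 0.3  # account authenticity signal
    if r.account_age_days >= 90:
        score += 0.3  # not a freshly created account
    return score

def is_recommended(r: Review, threshold: float = 0.5) -> bool:
    """Reviews below the threshold go to a 'not recommended' section."""
    return credibility_score(r) >= threshold
```

Note that the star rating is not an input at all, yet a filter like this can still suppress disproportionately many positive reviews in practice: satisfied one-time customers often post from new, sparse accounts, which is consistent with Yelp's defense that newer reviewers were filtered more without advertising playing any role.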
In September 2014, the Ninth Circuit Court of Appeals ruled unanimously in favor of Yelp in Levitt v. Yelp, Inc. The court held that businesses had no property right to positive reviews and that Yelp's manipulation of review display, even if motivated by commercial interests, did not constitute extortion under federal law. The ruling established that review platforms have broad discretion in how they display user-generated content. While the legal challenges were unsuccessful, the controversy led to increased scrutiny of algorithmic bias in recommendation systems and calls for greater transparency in content filtering practices.
Root Cause
Yelp's AI filtering algorithm allegedly exhibited bias by disproportionately filtering positive reviews from businesses that did not purchase advertising while allowing negative reviews to remain visible. The algorithm's training data and parameters may have been influenced by commercial considerations rather than pure review authenticity.
Mitigation Analysis
Independent algorithmic auditing could have detected systematic bias in review filtering patterns. Transparent disclosure of filtering criteria and regular bias testing across business categories would have identified disparate impact. Human oversight of filtering decisions for high-stakes business reviews could have caught algorithmic bias before widespread harm occurred.
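The kind of disparate-impact check described above can be done without access to the algorithm itself, using only observed filtering rates. A minimal sketch, assuming a hypothetical audit sample and using a standard two-proportion z-test (stdlib only; the counts below are invented for illustration):

```python
import math

def two_proportion_z(filtered_a: int, total_a: int,
                     filtered_b: int, total_b: int) -> float:
    """Z statistic for the difference in filtering rates between two groups.

    Under the null hypothesis (both groups filtered at the same rate),
    |z| > 1.96 is significant at the 5% level."""
    p_a = filtered_a / total_a
    p_b = filtered_b / total_b
    p_pool = (filtered_a + filtered_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical audit: share of positive reviews filtered for businesses
# that declined ads (group A) versus businesses that bought ads (group B).
z = two_proportion_z(filtered_a=450, total_a=1000,   # non-advertisers: 45%
                     filtered_b=250, total_b=1000)   # advertisers: 25%
print(round(z, 2))  # prints 9.38 -- far beyond the 1.96 significance cutoff
```

A gap this large across many business categories would be exactly the kind of systematic pattern an independent audit could surface, though, as the Ninth Circuit's ruling illustrates, a statistical disparity alone does not establish intent or legal liability.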
Litigation Outcome
The Ninth Circuit Court of Appeals ruled in favor of Yelp in September 2014, holding that businesses had no legal right to positive reviews and that Yelp could manipulate how reviews were displayed without incurring legal liability.
Lessons Learned
The case highlighted how AI systems can create systematic bias even when not explicitly programmed to discriminate, and how the lack of algorithmic transparency makes it difficult for affected parties to prove harmful bias. It also demonstrated the legal challenges in regulating platform algorithms that significantly impact business operations.
Sources
Yelp Wins Bid to End Manipulation Class Action
Courthouse News Service · Sep 2, 2014 · news
FTC Staff Finds Insufficient Evidence to Support Claims Against Yelp
Federal Trade Commission · Sep 2, 2014 · regulatory action
Yelp can give good reviews to paying customers, court rules
Washington Post · Sep 3, 2014 · news