
EU Opens First Formal Investigation Under AI Act Against Meta for Recommender System Practices

Severity
Medium

The European Commission opened its first formal investigation under the AI Act against Meta over potential non-compliance of its social media recommender systems with the regulation's transparency and risk assessment requirements.

Category
Other
Industry
Technology
Status
Under Investigation
Date Occurred
Dec 1, 2024
Date Reported
Jan 15, 2025
Jurisdiction
EU
AI Provider
Other/Unknown
Application Type
Embedded
Harm Type
Regulatory
Human Review in Place
Unknown
Litigation Filed
No
Regulatory Body
European Commission
EU AI Act, Meta, recommender systems, regulatory enforcement, social media, algorithmic transparency, high-risk AI

Full Description

The European Commission initiated its first formal investigation under the EU AI Act in January 2025, targeting Meta Platforms for potential violations related to its social media recommendation algorithms. The investigation focuses on whether Meta's content recommendation systems, which determine what users see in their Facebook and Instagram feeds, comply with the Act's requirements for high-risk AI systems. Under the AI Act, which entered into force in August 2024, certain AI applications are classified as high-risk and are subject to stringent transparency, documentation, and oversight requirements.

The Commission's investigation centers on three areas of potential non-compliance. First, whether Meta has conducted adequate conformity assessments and risk management procedures for its recommender systems, as required under Articles 9-15 of the AI Act. Second, whether Meta has implemented sufficient transparency measures, including providing clear information to users about how the AI systems influence content delivery. Third, whether Meta has established appropriate human oversight mechanisms to monitor and intervene in the systems' operations when necessary.

The probe was triggered by complaints from digital rights organizations and academic researchers, who argued that Meta's recommendation algorithms significantly influence user behavior and democratic discourse and therefore qualify as high-risk AI systems under the Act. These systems shape the information consumption of billions of users and have been linked to concerns about filter bubbles, the spread of misinformation, and mental health impacts, particularly among younger users.

Meta has publicly stated that it does not believe its recommendation systems fall under the high-risk category as defined by the AI Act, arguing they are general-purpose systems rather than specific high-risk applications. The company maintains it has implemented robust content moderation and user control features that exceed regulatory requirements. The Commission's preliminary assessment, however, suggests that the scale, reach, and societal impact of these systems may indeed trigger high-risk classification requirements. If violations are confirmed, Meta could face fines of up to 6% of its global annual revenue under the AI Act's penalty framework, potentially reaching several billion dollars.

The investigation represents a significant test case for how EU regulators will interpret and enforce the world's first comprehensive AI regulation, and industry observers note that the outcome will likely set precedents for how other major tech companies' AI systems are evaluated under the new framework. The investigation is expected to conclude by mid-2025, and the Commission has indicated it will issue interim guidance to other AI system operators based on its findings. The case marks a pivotal moment in global AI governance, as other jurisdictions closely monitor the EU's enforcement approach to inform their own regulatory strategies.
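The revenue-based penalty ceiling mentioned above can be illustrated with a small calculation. This is a minimal sketch: the 6% rate is taken from the description, while the revenue figure used below is a hypothetical placeholder, not Meta's actual reported revenue.

```python
def max_ai_act_fine(global_annual_revenue: float, rate: float = 0.06) -> float:
    """Return the maximum fine as a share of global annual revenue.

    The 6% default rate reflects the penalty ceiling cited in this
    incident description; actual AI Act penalty tiers vary by violation.
    """
    return global_annual_revenue * rate

# Hypothetical global annual revenue of $150 billion (placeholder figure):
fine_cap = max_ai_act_fine(150e9)
print(f"Maximum fine: ${fine_cap / 1e9:.1f} billion")  # Maximum fine: $9.0 billion
```

This shows why the description speaks of "several billion dollars": for any company with revenue in the hundreds of billions, a single-digit percentage cap still yields a multi-billion-dollar exposure.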

Root Cause

Meta's recommender systems, if classified as high-risk AI under the EU AI Act, may lack the transparency measures, risk assessment documentation, and human oversight mechanisms the regulation requires.

Mitigation Analysis

Comprehensive AI governance frameworks, including mandatory conformity assessments, detailed risk management systems, and transparent algorithmic impact documentation, could help ensure compliance. Proactive legal review of AI systems against regulatory requirements and adoption of auditable AI development processes would reduce the risk of regulatory violations.

Lessons Learned

This investigation demonstrates the EU's commitment to enforcing the AI Act against major technology companies and establishes important precedents for classifying social media recommender systems as high-risk AI. It highlights the need for proactive compliance strategies and clear regulatory guidance on AI system classification.
