EU AI Act First Enforcement Actions Target Prohibited AI Practices
Medium
The EU AI Act's first enforcement phase began February 2, 2025, prohibiting certain AI practices. Regulatory authorities initiated investigations into companies using banned AI applications such as social scoring and workplace emotion recognition.
Category
Other
Industry
Government
Status
Ongoing
Date Occurred
Feb 2, 2025
Date Reported
Feb 15, 2025
Jurisdiction
EU
AI Provider
Other/Unknown
Application Type
other
Harm Type
legal
Human Review in Place
Unknown
Litigation Filed
No
Regulatory Body
European Commission and EU Member State Authorities
EU AI Act · regulatory enforcement · prohibited AI practices · compliance · biometric systems · emotion recognition · social scoring
Full Description
On February 2, 2025, the European Union's Artificial Intelligence Act entered its first enforcement phase, marking a watershed moment in global AI regulation. The initial provisions that took effect included comprehensive prohibitions on AI systems deemed to pose unacceptable risks to fundamental rights and safety. Specifically banned were AI systems for social scoring, real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (with narrow exceptions), emotion recognition in workplaces and educational institutions, biometric categorization systems inferring sensitive personal data, and AI systems exploiting vulnerabilities of specific groups. Violations of these prohibitions carry fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
Within weeks of the enforcement date, the European Commission and member state authorities began coordinating investigations into suspected violations. Companies across multiple sectors found themselves under scrutiny for existing AI deployments that suddenly fell under the prohibited categories. The enforcement focused initially on the most egregious cases, including workplace surveillance systems using emotion recognition technology and retail environments deploying unauthorized biometric categorization. Several multinational technology companies received formal investigation notices regarding their AI systems' compliance status.
The compliance challenges proved substantial for organizations that had not adequately prepared for the AI Act's requirements. Many companies discovered that AI systems they considered routine business tools actually fell under the Act's prohibited categories. The complexity of determining which specific AI applications were banned created significant interpretation challenges, particularly around the boundaries of emotion recognition and biometric categorization. Legal and compliance teams struggled to audit existing AI deployments and determine which systems required immediate discontinuation.
The enforcement actions had immediate global implications, as multinational companies operating in the EU were forced to reassess their AI practices worldwide. Several major technology platforms announced they would disable certain AI features in EU markets rather than risk violations. The ripple effects extended beyond Europe, with companies in other jurisdictions proactively reviewing their AI systems to avoid similar regulatory challenges. Industry associations called for clearer guidance on compliance requirements, while some companies sought safe harbor provisions through voluntary compliance programs. The first phase of EU AI Act enforcement established a new baseline for acceptable AI practices that influenced regulatory discussions in the United States, United Kingdom, and other major markets.
Root Cause
Companies deployed AI systems for social scoring, emotion recognition in workplaces/schools, or biometric categorization that became prohibited under the EU AI Act's first enforcement phase.
Mitigation Analysis
Companies needed comprehensive AI governance frameworks including prohibited use case identification, legal compliance reviews, and continuous monitoring systems. Proper legal counsel and proactive auditing of existing AI deployments could have prevented violations. Cross-functional teams involving legal, compliance, and technical staff were essential for AI Act readiness.
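The prohibited-use-case identification step described above can be sketched as a simple screening pass over an internal AI inventory. The sketch below is purely illustrative: the record fields, rule names, and example system are hypothetical assumptions, not terms drawn from the AI Act, and a real audit would require legal review of each deployment rather than a rule table.

```python
from dataclasses import dataclass

# Hypothetical inventory record for an internal AI audit;
# field names are illustrative, not taken from the AI Act.
@dataclass
class AISystem:
    name: str
    purpose: str                # e.g. "emotion_recognition"
    deployment_context: str     # e.g. "workplace", "retail", "education"
    infers_sensitive_attributes: bool = False
    targets_vulnerable_groups: bool = False

def screen_prohibited(system: AISystem) -> list[str]:
    """Flag uses that loosely mirror the prohibited categories
    listed above (cf. Art. 5, Regulation (EU) 2024/1689)."""
    flags = []
    if system.purpose == "social_scoring":
        flags.append("social scoring")
    if (system.purpose == "emotion_recognition"
            and system.deployment_context in {"workplace", "education"}):
        flags.append("emotion recognition in workplace/education")
    if (system.purpose == "biometric_categorisation"
            and system.infers_sensitive_attributes):
        flags.append("biometric categorisation of sensitive attributes")
    if system.targets_vulnerable_groups:
        flags.append("exploitation of vulnerable groups")
    return flags

# Example: a hypothetical HR mood-monitoring tool would be flagged.
hr_tool = AISystem("mood-monitor", "emotion_recognition", "workplace")
print(screen_prohibited(hr_tool))
```

A rule table like this is only a triage aid; its real value is forcing the cross-functional inventory of deployments that legal and compliance teams reportedly lacked.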
Lessons Learned
The EU AI Act's enforcement demonstrates the critical importance of proactive compliance planning for AI regulations. Companies must establish robust AI governance frameworks before regulations take effect, as post-implementation compliance proves significantly more complex and costly.
Sources
Regulation (EU) 2024/1689 on Artificial Intelligence
Official Journal of the European Union · Jul 12, 2024 · regulatory action
European Commission Begins AI Act Enforcement with First Investigations
European Commission · Feb 15, 2025 · company statement