
EU AI Act Passes as World's First Comprehensive AI Regulation Framework

Severity
Medium

The EU AI Act became the world's first comprehensive AI regulation in March 2024, establishing a risk-based classification system and strict requirements for foundation models and high-risk AI systems.

Category
Other
Industry
Government
Status
Resolved
Date Occurred
Mar 13, 2024
Date Reported
Mar 13, 2024
Jurisdiction
EU
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Operational
Human Review in Place
Yes
Litigation Filed
No
Regulatory Body
European Parliament and Council
regulation · eu_ai_act · foundation_models · compliance · risk_assessment · governance

Full Description

The European Union's Artificial Intelligence Act was formally adopted by the European Parliament on March 13, 2024, following nearly four years of legislative development that began with the European Commission's initial proposal in April 2021. The Act is the world's first comprehensive regulatory framework for artificial intelligence, establishing a risk-based approach that sorts AI systems into four tiers: minimal risk, limited risk, high risk, and unacceptable risk.

The legislation evolved significantly during negotiations, particularly after the emergence of generative AI systems such as ChatGPT in late 2022. Initially focused on traditional AI applications, the Act was expanded to include specific provisions for general-purpose AI (GPAI) models and foundation models. The final compromise, reached in December 2023 after intense trilogue negotiations among the Parliament, Council, and Commission, introduced tiered obligations for foundation models based on computational thresholds. A key point of contention pitted France and Germany, which advocated lighter regulation of foundation models to protect their domestic AI champions, against other member states and the Parliament, which pushed for stricter oversight. Under the compromise, all GPAI models face baseline transparency and documentation obligations, while models trained with more than 10^25 floating-point operations are presumed to pose systemic risk and face additional requirements, including model evaluation, adversarial testing (red-teaming), systemic risk assessment, and serious-incident reporting.

The Act prohibits certain AI practices deemed unacceptable, including social scoring systems, real-time remote biometric identification in publicly accessible spaces (with limited exceptions for law enforcement), and AI systems that exploit the vulnerabilities of specific groups.

High-risk AI systems, including those used in critical infrastructure, education, employment, and law enforcement, must undergo conformity assessments, maintain detailed documentation, ensure human oversight, and meet accuracy and robustness requirements. Implementation follows a staggered timeline: prohibitions on unacceptable AI practices take effect six months after entry into force, codes of practice for foundation models must be developed within nine months, and the full regulatory framework becomes applicable 24 months after entry into force. The Act grants enforcement powers to national authorities and establishes fines of up to €35 million or 7% of global annual turnover for the most serious violations. Market surveillance authorities will monitor compliance, while the European AI Office will oversee foundation model compliance and coordinate enforcement across member states.
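The compute-threshold rule for general-purpose AI models described above can be sketched as a simple classification check. This is an illustrative sketch only; the function and tier names are hypothetical, not terminology from the Act's text.

```python
# Illustrative sketch of the Act's GPAI compute threshold: models trained
# with more than 10^25 floating-point operations are presumed to pose
# systemic risk and face stricter obligations. The names below are
# hypothetical, chosen for illustration.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def gpai_tier(training_flops: float) -> str:
    """Classify a general-purpose AI model by its training compute."""
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Evaluations, adversarial testing, incident reporting apply.
        return "gpai_with_systemic_risk"
    # Baseline transparency and documentation obligations apply.
    return "gpai_baseline"

print(gpai_tier(5e25))  # a frontier-scale training run
print(gpai_tier(1e23))  # a smaller model
```

In practice the threshold is a presumption rather than a bright line: providers above it can contest the systemic-risk designation, and the Commission can adjust the threshold as compute efficiency changes.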

Root Cause

Legislative response to rapid AI development and perceived risks, requiring a comprehensive regulatory framework for AI systems across all sectors.

Mitigation Analysis

The Act establishes mandatory human oversight requirements for high-risk AI systems, requires conformity assessments and CE marking, and mandates risk management systems. Provenance tracking through documentation requirements and algorithmic impact assessments could help ensure compliance and reduce regulatory violations.

Lessons Learned

The EU AI Act demonstrates how regulatory frameworks can evolve rapidly to address emerging technologies, requiring flexible approaches that balance innovation with risk mitigation. The lengthy negotiation process highlights the complexity of regulating cross-cutting technologies and the importance of international coordination as AI systems become increasingly global.

Sources

Artificial Intelligence Act: MEPs adopt landmark law
European Parliament · Mar 13, 2024 · regulatory action
A European approach to artificial intelligence
European Commission · Mar 13, 2024 · regulatory action