Getty Images Sues Stability AI for Copyright Infringement in AI Training
High
Getty Images filed lawsuits against Stability AI alleging Stable Diffusion was trained on millions of copyrighted images without permission, seeking billions in damages and setting major precedent for AI training data rights.
Category
Copyright Violation
Industry
Media
Status
Litigation Pending
Date Occurred
Aug 1, 2022
Date Reported
Jan 17, 2023
Jurisdiction
US
AI Provider
Stability AI
Model
Stable Diffusion
Application Type
API Integration
Harm Type
Financial
Estimated Cost
$500,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
copyright · training_data · stable_diffusion · getty_images · intellectual_property · fair_use · watermarks · generative_ai · litigation
Full Description
On January 17, 2023, Getty Images filed parallel lawsuits against Stability AI in U.S. federal court in Delaware and the High Court of Justice in London, alleging that the company's Stable Diffusion AI image generator was trained on millions of Getty Images' copyrighted photographs without authorization. The lawsuit represents one of the first major copyright infringement cases targeting the training practices of generative AI companies, with potential implications for the entire AI industry.
Getty's legal complaint centers on allegations that Stability AI downloaded and used over 12 million copyrighted images from Getty's collection to train Stable Diffusion models without obtaining licenses or paying compensation. As evidence, Getty presented AI-generated images that contained distorted or corrupted versions of Getty's distinctive watermarks, suggesting the training dataset included watermarked Getty content. The lawsuit argues this constitutes both direct copyright infringement and violation of Getty's trademark rights.
The case raises fundamental questions about fair use in AI training, with Getty arguing that commercial AI training cannot qualify for fair use protections when it involves wholesale copying of copyrighted works for profit. Stability AI has maintained that its training practices constitute fair use under existing copyright law, arguing that the AI learns concepts rather than copying specific images. The company initially responded that it believes its approach falls within established precedents for transformative use.
Beyond monetary damages potentially reaching billions of dollars, Getty is seeking injunctive relief that could force Stability AI to retrain its models using only properly licensed content. The outcome could establish crucial precedent for how AI companies must handle copyrighted training data, potentially requiring industry-wide changes to data collection and model development practices. The case also highlights the tension between rapid AI innovation and traditional intellectual property frameworks, with implications extending to other content creators, publishers, and rights holders worldwide.
Root Cause
Stability AI allegedly scraped and used millions of copyrighted Getty Images in training datasets without permission, licensing, or compensation to rights holders.
Mitigation Analysis
Implementing dataset provenance tracking and copyright filtering before training could have prevented unauthorized use. Establishing licensing agreements with content providers and implementing opt-out mechanisms for rights holders would reduce legal exposure. Technical safeguards like watermark detection and removal of protected content from training data would demonstrate good faith efforts to respect intellectual property.
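The pre-training safeguards described above can be illustrated with a minimal sketch of a dataset filter. This is a hypothetical example, not Stability AI's or Getty's actual tooling: the `ImageRecord` fields, the license allowlist, and the `has_watermark` flag (assumed to come from a separate watermark-detection model) are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    """Hypothetical provenance metadata attached to each training image."""
    url: str
    license: str        # e.g. "cc0", "licensed", "unknown"
    has_watermark: bool # assumed output of a separate watermark detector
    opted_out: bool     # rights holder registered an opt-out

# Illustrative allowlist: only licenses the trainer has verified rights to use.
ALLOWED_LICENSES = {"cc0", "public_domain", "licensed"}

def filter_training_set(records):
    """Split records into (kept, excluded) before any training occurs.

    An image is kept only if its license is on the allowlist, no watermark
    was detected, and the rights holder has not opted out.
    """
    kept, excluded = [], []
    for r in records:
        ok = (r.license in ALLOWED_LICENSES
              and not r.has_watermark
              and not r.opted_out)
        (kept if ok else excluded).append(r)
    return kept, excluded
```

In practice such a filter would run over the full crawl before model training, with the excluded set logged to demonstrate the good-faith provenance tracking the analysis describes.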
Lessons Learned
This case demonstrates the critical importance of establishing clear legal frameworks for AI training data usage and the need for proactive rights management in the AI development process. It highlights how existing copyright law may be insufficient to address the scale and nature of modern AI training practices.
Sources
Getty Images lawsuit says Stability AI misused photos to train AI
Reuters · Feb 6, 2023 · news
Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content
The Verge · Jan 17, 2023 · news