
DeviantArt DreamUp AI Trained on Artist Works Without Consent

Severity
High

DeviantArt launched DreamUp AI tool using Stable Diffusion trained on artist works without consent, sparking massive community backlash and demands for opt-out mechanisms from affected creators.

Category
Copyright Violation
Industry
Media
Status
Ongoing
Date Occurred
Nov 9, 2022
Date Reported
Nov 9, 2022
Jurisdiction
International
AI Provider
Other/Unknown
Model
Stable Diffusion
Application Type
Embedded
Harm Type
Legal
People Affected
50,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Tags
copyright · training_data · artist_rights · stable_diffusion · opt_out · creative_industry · intellectual_property

Full Description

On November 9, 2022, DeviantArt announced the launch of DreamUp, an AI-powered image generator integrated into its platform and built on the Stable Diffusion model. The announcement immediately triggered fierce backlash from the artist community when it became clear that the underlying model had been trained on millions of artworks from DeviantArt and other platforms without explicit artist consent. Stable Diffusion was trained on the LAION-5B dataset, which contains over 5 billion image-text pairs scraped from across the internet, including substantial portions of DeviantArt's catalog spanning two decades of user-generated content.

The controversy intensified as artists discovered they could generate images in specific artists' styles simply by including artist names in prompts, effectively allowing users to create derivative works that mimicked established creators' distinctive techniques and aesthetics. High-profile artists, including Greg Rutkowski, whose name became one of the most popular prompts, found their styles being replicated without permission or compensation. The situation was particularly fraught given DeviantArt's position as a trusted platform where artists had uploaded their works expecting protection of their intellectual property rights.

Within hours of the announcement, thousands of artists began protesting through social media campaigns, petition drives, and threats to leave the platform entirely. A Change.org petition titled 'DeviantArt: Stop AI Art Generation Unless Explicit Opt-in Consent Given' rapidly gathered over 10,000 signatures. Artists argued that using their works to train commercial AI systems without consent violated both copyright law and basic ethical principles of creative attribution. Many expressed concern that AI-generated art would devalue human creativity and eliminate career opportunities for emerging artists.
Faced with overwhelming community opposition and potential legal challenges, DeviantArt attempted damage control by introducing an opt-out system based on 'noai' directives, followed later by a DreamUp-specific opt-out feature. This retroactive approach was criticized as insufficient, since the training had already occurred and the harm was done. The company also faced technical obstacles to making opt-out effective: removing specific works from an already-trained model is practically impossible without retraining. The incident highlighted fundamental tensions between technological innovation and artist rights, contributing to broader legal challenges against AI companies and ongoing debates about fair use in machine learning training datasets.
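For context, the 'noai' opt-out takes the form of an HTML meta directive that a page owner can emit so that crawlers assembling training datasets skip the page's content. A minimal sketch follows; note that the directive names match DeviantArt's announced convention, but compliance is entirely voluntary on the crawler's side, which is part of why artists considered the mechanism inadequate:

```html
<!-- Signals that content on this page should not be used for AI training.
     "noai" covers all content; "noimageai" specifically covers images.
     Crawlers are not technically forced to honor either directive. -->
<meta name="robots" content="noai, noimageai">
```

Because the tag only affects future scraping, it cannot remove works from datasets such as LAION-5B that were compiled before the directive existed, nor from models already trained on them.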

Root Cause

DeviantArt integrated the Stable Diffusion model, which was trained on the LAION dataset of billions of images scraped from the internet, including DeviantArt artworks, without obtaining explicit consent from the artists whose works were included in the training data.

Mitigation Analysis

The incident could have been prevented through explicit opt-in consent mechanisms before including works in training datasets, proactive artist notification systems, and content provenance tracking to identify copyrighted material. Post-incident, implementing robust opt-out systems and artist compensation mechanisms could reduce ongoing harm.

Litigation Outcome

Class action lawsuits were filed against Stability AI and other companies alleging copyright infringement in the use of training data.

Lessons Learned

The incident demonstrates the critical importance of obtaining explicit consent before using copyrighted creative works in AI training datasets, and highlights the inadequacy of retroactive opt-out mechanisms once training has occurred.

Sources

The End of Art: An Argument Against Image AIs
ArtStation · Dec 6, 2022 · company statement