Midjourney V7 Generated Photorealistic Images That Fooled Professional Fact-Checkers
Severity
High
Midjourney V7's photorealistic capabilities fooled professional fact-checkers at major news outlets, leading to widespread publication of AI-generated images as authentic photos. The incident highlighted critical gaps in media verification processes and sparked regulatory investigations into synthetic media disclosure requirements.
Category
Deepfake / Fraud
Industry
Media
Status
Under Investigation
Date Occurred
Jan 15, 2025
Date Reported
Jan 20, 2025
Jurisdiction
International
AI Provider
Other/Unknown
Model
Midjourney V7
Application Type
Other
Harm Type
Reputational
Estimated Cost
$2,500,000
People Affected
15,000,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
Pending
Regulatory Body
European Commission Digital Services Act Team
Tags
synthetic_media, fact_checking, photorealism, media_verification, deepfakes, journalism, misinformation
Full Description
On January 15, 2025, several major international news organizations, including Reuters subsidiary outlets and regional European newspapers, published what they believed were authentic photographs depicting a purported protest in Eastern Europe. The images, showing crowds of demonstrators with remarkably detailed facial features and realistic lighting conditions, were later revealed to have been generated using Midjourney's newly released V7 model. The synthetic images had been circulated on social media platforms and submitted to news outlets by accounts claiming to be on-the-ground sources.
Fact-checking teams at three major news organizations, including BBC Verify and AFP's digital investigation unit, initially authenticated the images using their standard verification protocols. The AI-generated photos passed traditional detection methods, including reverse image searches, metadata analysis, and visual inspection for common AI artifacts such as inconsistent shadows, anatomical anomalies, and repetitive patterns. Midjourney V7's enhanced capabilities had effectively eliminated these telltale signs, incorporating sophisticated understanding of photographic principles including depth of field, natural skin textures, and contextually appropriate environmental details.
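To illustrate why metadata analysis alone failed here, the sketch below runs weak-signal consistency checks over already-extracted EXIF-style metadata. This is a hypothetical illustration, not any outlet's actual protocol; the field names and function are our own, and a real pipeline would first parse the metadata with a tool such as exiftool or Pillow. Crucially, metadata is trivially stripped or forged, so a clean result proves nothing.

```python
# Toy metadata consistency check. Assumes EXIF fields have already been
# extracted into a plain dict; field list and heuristics are illustrative.
REQUIRED_CAMERA_FIELDS = {"Make", "Model", "DateTimeOriginal", "ExposureTime", "FNumber"}

def metadata_red_flags(exif: dict) -> list:
    """Return weak-signal warnings. Absence of camera fields alone proves
    nothing, and forged metadata passes cleanly, which is one reason
    standard verification protocols missed the synthetic images."""
    flags = []
    missing = REQUIRED_CAMERA_FIELDS - exif.keys()
    if missing:
        flags.append("missing camera fields: %s" % sorted(missing))
    software = exif.get("Software", "")
    if any(tag in software.lower() for tag in ("midjourney", "stable diffusion", "dall")):
        flags.append("generator tag in Software field: %r" % software)
    return flags

# A fully forged metadata block raises no flags at all:
forged = {"Make": "Canon", "Model": "EOS R5", "DateTimeOriginal": "2025:01:15 09:12:00",
          "ExposureTime": "1/250", "FNumber": "2.8"}
print(metadata_red_flags(forged))  # → []
```

The empty result for the forged block is the point: checklist-style metadata review is a necessary but far from sufficient verification step.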
The deception was discovered approximately 48 hours later when Bellingcat researchers, using newly developed detection algorithms specifically designed for advanced AI models, identified subtle mathematical patterns in the pixel arrangements that were inconsistent with camera sensor noise. By this time, the fabricated images had been viewed by an estimated 15 million people across various platforms and had influenced several editorial decisions regarding coverage of the alleged events. The incident prompted immediate retractions from affected news outlets and sparked internal investigations into their image verification procedures.
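The statistical idea behind sensor-noise detectors can be sketched in a few lines. This is a toy illustration of the general principle, not Bellingcat's actual algorithm (which has not been published in detail): real camera sensors leave a roughly uniform noise floor in the high-frequency residual of an image, while overly smooth synthetic imagery can lack it. The function name, window size, and threshold below are our own assumptions.

```python
import numpy as np

def noise_residual_energy(img, k=3):
    """Variance of the high-frequency residual left after subtracting a
    k x k local mean. A plausible camera image retains a sensor-noise
    floor here; an implausibly low residual is one weak signal of a
    synthetic image. (Toy heuristic, not a production detector.)"""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    blurred = np.zeros(img.shape, dtype=float)
    for dy in range(k):          # box blur by summing shifted windows
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return float(np.var(img - blurred))

# Demo on synthetic data: a "camera" image (gradient plus sensor noise)
# versus an overly smooth "generated" image (gradient only).
rng = np.random.default_rng(0)
base = np.tile(np.linspace(0, 255, 64), (64, 1))
camera_like = base + rng.normal(0, 2.0, base.shape)
smooth_like = base
print(noise_residual_energy(camera_like) > 5 * noise_residual_energy(smooth_like))  # → True
```

Modern generators can of course synthesize plausible noise too, which is why detectors of this kind must be continually retrained against new model versions.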
The broader implications extended beyond individual news organizations to fundamental questions about visual evidence in the digital age. Insurance claims related to reputational damage from the false reporting are estimated to exceed $2.5 million across affected media companies. The European Commission's Digital Services Act enforcement team launched a formal investigation into whether Midjourney's lack of embedded watermarks or detection metadata constituted a violation of emerging synthetic media disclosure requirements. Legal actions are pending in multiple jurisdictions, with news outlets exploring claims against both the platform and the individuals who submitted the fabricated images.
Root Cause
Midjourney V7's enhanced photorealism capabilities eliminated traditional detection markers like inconsistent lighting, anatomical errors, and texture artifacts that fact-checkers previously relied upon for identifying AI-generated content. The model's improved understanding of photography fundamentals made synthetic images virtually indistinguishable from authentic photographs.
Mitigation Analysis
Mandatory watermarking or cryptographic provenance metadata embedded in AI-generated images could have prevented misattribution. Detection tooling trained specifically on V7's statistical artifacts, combined with a requirement that news outlets verify image sources through blockchain-based authenticity certificates, would significantly reduce the risk of synthetic media being published as genuine.
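The core of any provenance scheme, whether blockchain-anchored or manifest-based like C2PA content credentials, is a verifiable binding between image bytes and an attestation. The minimal sketch below uses a keyed digest as a stand-in for that binding; real systems sign manifests with asymmetric keys and anchor them in trusted infrastructure, and the names and shared-secret key here are illustrative only.

```python
import hashlib
import hmac

# Stand-in shared secret; production provenance systems (e.g. C2PA
# content credentials) use asymmetric signatures, not a shared key.
SIGNING_KEY = b"newsroom-provenance-demo-key"

def attach_provenance(image_bytes):
    """Produce a keyed digest over the image bytes, a minimal stand-in
    for a signed provenance manifest issued at capture or generation."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_provenance(image_bytes, claimed_digest):
    """Recompute the digest and compare in constant time; any tampering
    with the bytes after signing invalidates the record."""
    return hmac.compare_digest(attach_provenance(image_bytes), claimed_digest)

record = attach_provenance(b"original image bytes")
print(verify_provenance(b"original image bytes", record))  # → True
print(verify_provenance(b"tampered image bytes", record))  # → False
```

Such a scheme only helps if generators are required to embed attestations and newsrooms are required to check them, which is precisely what the disclosure rules under investigation would mandate.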
Lessons Learned
The incident demonstrates that traditional fact-checking methodologies are inadequate for detecting sophisticated AI-generated content. Media organizations must invest in specialized detection tools and establish partnerships with technical experts to maintain verification capabilities as synthetic media technology advances.
Sources
Major News Outlets Fooled by AI-Generated Images from Midjourney V7
Reuters · Jan 20, 2025 · news
Detecting Midjourney V7: New Challenges for Visual Verification
Bellingcat · Jan 21, 2025 · news