
Indian Political Parties Deploy AI Deepfakes in 2024 General Elections

Severity
High

During India's 2024 general elections, political parties extensively used AI deepfakes including videos of deceased politicians and fake opponent content, affecting nearly one billion eligible voters and prompting regulatory intervention by election authorities.

Category
Deepfake / Fraud
Industry
Government
Status
Resolved
Date Occurred
Apr 19, 2024
Date Reported
May 15, 2024
Jurisdiction
India
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Reputational
People Affected
968,000,000
Human Review in Place
No
Litigation Filed
No
Regulatory Body
Election Commission of India
Tags: deepfakes, elections, political, india, democracy, misinformation, AI manipulation, electoral integrity

Full Description

India's 2024 general elections, held from April 19 to June 1, 2024, witnessed unprecedented use of AI-generated deepfake content by major political parties, including the Bharatiya Janata Party (BJP) and the Indian National Congress. The BJP circulated deepfake videos featuring deceased party leaders such as Bal Thackeray and former Prime Minister Atal Bihari Vajpayee appearing to endorse current candidates, and produced AI-translated versions of Prime Minister Narendra Modi's speeches in regional languages he does not speak fluently. The Congress party responded with AI-generated content of its own, including deepfake videos intended to discredit BJP candidates and fabricated speeches attributed to opposition leaders.

Regional parties across multiple states deployed similar tactics; by one estimate, 90% of political content on social media platforms contained some form of AI manipulation during the peak campaign period. The technology was used primarily for voice cloning, face swapping, and fully synthetic campaign advertisements. The Election Commission of India initially struggled to respond, as existing electoral laws contained no specific provisions for AI-generated content. Platforms such as Facebook, YouTube, and X (formerly Twitter) implemented limited detection measures, but the volume and sophistication of the content overwhelmed automated systems. Independent fact-checking organizations documented hundreds of deepfake videos across platforms, some of which received millions of views before they were identified and removed.

The widespread deployment of AI deepfakes raised significant concerns about electoral integrity and informed democratic participation. Voters in rural areas, who make up over 65% of India's electorate, were particularly vulnerable to deception because of limited digital literacy. Post-election surveys indicated that a substantial share of voters were unaware they had viewed AI-generated content, and many reported that fabricated endorsements and statements influenced their voting decisions. In response, the Election Commission issued emergency guidelines requiring political parties to declare AI-generated content and established a rapid response team for deepfake identification. These measures came late in the election cycle, however, and had limited effect. The incident exposed critical gaps in India's electoral regulatory framework and prompted discussion of comprehensive AI governance legislation for future elections.

Root Cause

Political parties systematically deployed AI deepfake technology to create fabricated campaign content, including videos of deceased politicians endorsing candidates and fake translated speeches, without adequate disclosure or regulatory oversight during the election period.

Mitigation Analysis

Implementation of mandatory AI content disclosure requirements, real-time deepfake detection systems on social media platforms, and pre-publication review processes for political advertisements could have reduced the spread of synthetic media. Enhanced digital literacy campaigns and standardized authentication protocols for political content would provide additional protection against electoral manipulation.
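As a minimal sketch of how a mandatory-disclosure rule could gate a pre-publication review queue: submissions that omit an explicit AI-generation declaration are held back rather than passed to reviewers. All names here (`AdSubmission`, `ai_generated`, `screen`) are illustrative assumptions, not part of any actual Election Commission or platform system.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class AdSubmission:
    """A political ad submitted for pre-publication review (hypothetical schema)."""
    party: str
    media_url: str
    ai_generated: Optional[bool] = None  # None means the disclosure field was omitted


def screen(submissions: List[AdSubmission]) -> Tuple[List[AdSubmission], List[AdSubmission]]:
    """Partition submissions: ads carrying an explicit AI disclosure (True or
    False) are queued for human review; ads missing the disclosure are held
    back for resubmission with the field completed."""
    queued: List[AdSubmission] = []
    held: List[AdSubmission] = []
    for ad in submissions:
        if ad.ai_generated is None:
            held.append(ad)    # disclosure missing: block until declared
        else:
            queued.append(ad)  # disclosure present: forward to reviewers
    return queued, held
```

The point of the sketch is that disclosure enforcement is cheap to automate at the intake step, whereas actually detecting undeclared synthetic media remains the hard, unsolved part of the pipeline.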

Lessons Learned

The incident demonstrated that electoral systems require proactive AI governance frameworks before deployment rather than reactive measures during campaigns. It highlighted the particular vulnerability of diverse, multilingual democracies to AI manipulation and the need for platform-agnostic detection and disclosure standards.