AI Voice Synthesis Robocalls Target Elderly Voters with Election Misinformation
High
AI-generated robocalls mimicking President Biden's voice targeted roughly 5,000 elderly New Hampshire voters with false election information. The FCC responded by ruling AI-generated voices in robocalls illegal and proposing a $6 million fine against the transmitting carrier.
Category
Deepfake / Fraud
Industry
Government
Status
Litigation Pending
Date Occurred
Jan 21, 2024
Date Reported
Jan 22, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
legal
People Affected
5,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
Federal Communications Commission
Fine Amount
$6,000,000
deepfake · voice_cloning · voter_suppression · elderly_targeting · robocalls · election_interference · FCC_ruling
Full Description
On January 21, 2024, approximately 5,000 New Hampshire residents received robocalls featuring an AI-generated voice clone of President Joe Biden discouraging them from voting in the state's Democratic primary. The calls, which lasted about 20 seconds, told recipients to 'save your vote for the November election' and falsely claimed that voting in the primary would prevent participation in the general election. The synthetic voice convincingly replicated Biden's distinctive speaking patterns and vocal characteristics.
The calls were traced to political operative Steven Kramer, who hired magician and voice artist Paul Carpenter to create the Biden voice clone using AI technology. Kramer then contracted with Life Corporation, a Texas-based company, to distribute the robocalls through Lingo Telecom. The operation specifically targeted elderly voters, exploiting their higher likelihood to answer unknown calls and potential difficulty in detecting AI-generated content.
The Federal Communications Commission responded swiftly to the incident, unanimously ruling on February 8, 2024, that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act. This marked the first time the FCC explicitly banned AI-generated voices in unsolicited calls. The commission stated that AI-generated voices constitute 'artificial or prerecorded voices' requiring prior consent, effectively outlawing their use in most robocall scenarios.
Criminal charges were filed against Kramer in May 2024: 13 felony counts of voter suppression and 13 misdemeanor counts tied to the spoofed caller ID, which had displayed the name and number of Kathy Sullivan, a former New Hampshire Democratic Party chair. The FCC also proposed a $6 million fine against Lingo Telecom for transmitting the illegal robocalls, one of the largest proposed penalties for robocall violations.
The incident highlighted the vulnerability of democratic processes to AI-powered disinformation campaigns, particularly those targeting elderly populations who may be less equipped to identify synthetic media. State election officials reported confusion among voters who received the calls, with some believing the messages were legitimate communications from the Biden campaign. The timing of the calls, just before the primary election, maximized their potential impact on voter behavior and turnout.
Root Cause
AI voice synthesis technology was used to clone President Biden's voice and deploy mass robocalls without disclosure of artificial generation, exploiting elderly voters' trust in familiar voices and their difficulty detecting deepfakes.
Mitigation Analysis
Mandatory disclosure of AI-generated voices in robocalls, combined with rigorous caller ID verification, could have made this incident substantially harder to execute. Enhanced detection of synthetic voices in telecom infrastructure and stricter penalties for political deepfakes would deter similar attacks, and real-time monitoring of robocall content during election periods would shorten response times when attacks do occur.
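The caller ID verification mentioned above is standardized in US telephony as STIR/SHAKEN, in which the originating carrier attaches a signed PASSporT (a JWT) attesting to how well it knows the caller and the number being displayed. A spoofed call like the ones in this incident would at best carry a low attestation level. Below is a minimal sketch, using only the standard library, of extracting the attestation level from a PASSporT payload; the token here is a hypothetical illustration and real verification would also require checking the signature against the carrier's certificate chain, which is omitted:

```python
import base64
import json

def passport_claims(token: str) -> dict:
    """Decode the payload of a STIR/SHAKEN PASSporT (a JWT).

    Signature verification is deliberately skipped in this sketch;
    production code must validate it against the signing carrier's cert.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def attestation_level(claims: dict) -> str:
    # "A" = full attestation (carrier knows the customer AND the number),
    # "B" = partial (knows the customer, not the number),
    # "C" = gateway attestation only (call merely entered the network here).
    return claims.get("attest", "none")

def b64url(obj: dict) -> str:
    """Helper: unpadded base64url encoding of a JSON object."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build a hypothetical token (header.payload.signature) for illustration.
header = b64url({"alg": "ES256", "ppt": "shaken", "typ": "passport"})
payload = b64url({"attest": "C",
                  "orig": {"tn": "16035550100"},
                  "dest": {"tn": ["16035550199"]}})
token = f"{header}.{payload}.fake-signature"

claims = passport_claims(token)
print(attestation_level(claims))  # prints "C" -- gateway-only attestation
```

A terminating carrier or analytics engine can use the attestation level as one signal: a high-volume campaign displaying a local political figure's number but carrying only "C" attestation (or no PASSporT at all) is a strong candidate for blocking or labeling before it reaches voters.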
Litigation Outcome
Criminal charges filed against political operative Steven Kramer for voter suppression and spoofing violations
Lessons Learned
This incident demonstrates the urgent need for comprehensive regulations governing AI use in political communications and the particular vulnerability of elderly populations to sophisticated deepfake attacks. The rapid regulatory response shows that existing telecommunications law can be adapted to address AI threats, but enforcement mechanisms require strengthening.
Sources
Fake Biden robocalls urge voters to skip New Hampshire primary
CNN · Jan 22, 2024 · news
FCC Makes AI-Generated Voices in Robocalls Illegal
Federal Communications Commission · Feb 8, 2024 · regulatory action
Political operative charged in fake Biden robocall scheme targeting New Hampshire voters
Washington Post · May 30, 2024 · news
FCC proposes $6 million fine for company that transmitted AI Biden robocalls
Reuters · Aug 6, 2024 · news