AI-Generated Robocalls Impersonating Biden Discourage New Hampshire Primary Voting
High
AI-generated robocalls mimicking President Biden's voice were sent to New Hampshire voters before the 2024 primary, discouraging voting participation and prompting FCC enforcement action.
Category
Deepfake / Fraud
Industry
Government
Status
Ongoing
Date Occurred
Jan 21, 2024
Date Reported
Jan 22, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
legal
People Affected
5,000
Human Review in Place
No
Litigation Filed
Yes
Litigation Status
pending
Regulatory Body
Federal Communications Commission
Fine Amount
$6,000,000
voice_cloning, election_interference, robocalls, political_manipulation, voter_suppression, deepfake, FCC_enforcement
Full Description
On January 21, 2024, approximately 5,000 New Hampshire voters received robocalls featuring an AI-generated voice clone of President Joe Biden discouraging them from voting in the upcoming Democratic primary. The calls, made just days before the January 23 primary election, used sophisticated voice synthesis technology to create a convincing impersonation of the President's distinctive speaking style and mannerisms.
The robocalls included the phrase 'What a bunch of malarkey' and falsely suggested that voting in the primary would prevent voters from participating in the November general election. The calls were traced to political consultant Steven Kramer, who had worked with businessman Dean Phillips' presidential campaign; the Phillips campaign denied involvement and fired Kramer upon learning of the incident.
The Federal Communications Commission launched an immediate investigation after complaints flooded in from voters and election officials. New Hampshire Attorney General John Formella also opened a criminal investigation into potential voter suppression. The incident prompted widespread concern about the use of AI technology to interfere with democratic processes and the potential for deepfake audio to deceive voters at scale.
In response to the incident, the FCC unanimously ruled that AI-generated voices in robocalls qualify as artificial voices under the Telephone Consumer Protection Act, making such calls illegal without the called party's prior consent, effective immediately. Subsequently, the FCC imposed a $6 million fine on Steven Kramer for creating and distributing the deceptive calls.
The incident marked one of the first major documented cases of AI voice cloning being used for election interference in the United States. It highlighted the vulnerability of electoral systems to emerging AI technologies and the need for updated regulatory frameworks. The case also demonstrated how quickly AI-generated content could be produced and distributed through existing telecommunications infrastructure.
Beyond regulatory action, the incident prompted broader discussions about authentication mechanisms for political communications, the role of telecommunications providers in detecting AI-generated content, and the need for voter education about deepfake technology. Several states subsequently introduced legislation requiring disclosure when AI is used in political advertisements or communications.
Root Cause
Political consultant Steven Kramer used AI voice cloning technology to create deepfake audio impersonating President Biden, then distributed it via robocall systems without disclosure or consent.
Mitigation Analysis
Several safeguards could have reduced the risk of this incident: real-time deepfake detection on telecommunications networks, mandatory AI disclosure labels for political communications, and pre-deployment human review of automated political messaging. Telecom providers need AI detection capabilities, and regulatory frameworks must require explicit consent and disclosure for AI-generated political content.
Lessons Learned
This incident demonstrates the urgent need for proactive regulation of AI-generated content in political communications, robust detection capabilities in telecommunications infrastructure, and clear legal frameworks that can quickly adapt to emerging AI threats to democratic processes.
Sources
FCC Makes AI-Generated Voices in Robocalls Illegal
Federal Communications Commission · Feb 8, 2024 · regulatory action
Fake Biden robocalls telling people not to vote reach New Hampshire ahead of primary
Washington Post · Jan 22, 2024 · news