
AI Obituary Generators Created Fake Death Notices for Living People

High

AI-powered content generators created fake obituaries for living people that appeared in search results and on funeral home websites, causing reputational harm and emotional distress with limited recourse for removal.

Category
Defamation
Industry
Media
Status
Ongoing
Date Occurred
Jan 1, 2023
Date Reported
Aug 15, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
other
Harm Type
reputational
People Affected
50
Human Review in Place
No
Litigation Filed
No
Tags
obituaries, defamation, content_generation, search_results, verification, takedown, misinformation

Full Description

Throughout 2023 and into 2024, artificial intelligence content generation tools were exploited to create convincing but entirely fabricated obituaries for living individuals. These AI-generated death notices appeared prominently in Google search results and were published on various funeral home and obituary websites that had integrated automated content generation systems without adequate human oversight.

The fake obituaries typically contained realistic biographical details, often scraped from public sources such as LinkedIn profiles or social media accounts, making them appear credible to readers. The AI systems generated plausible family relationships, career accomplishments, and causes of death that had no basis in reality. Some funeral homes had adopted AI tools to generate obituary content more efficiently but failed to implement verification systems to confirm the accuracy of the information being published.

Victims of these fake obituaries reported significant emotional distress upon discovering their own death notices online. Family members and friends contacted victims in confusion and concern after finding the false obituaries through routine internet searches. The reputational impact was particularly severe for professionals whose fake obituaries appeared on the first page of search results for their names. Some individuals reported impacts on business relationships and employment opportunities as colleagues and clients encountered the false death notices.

Removing the fake obituaries proved extremely challenging for victims. Search engines were slow to respond to removal requests, and many of the publishing websites had limited customer service infrastructure for handling takedown requests. Even when individual obituaries were removed, they often remained cached in search results or had been republished across multiple sites. The distributed nature of online obituary publishing made comprehensive removal nearly impossible, leaving some victims with persistent false death notices appearing in searches for their names for months after the initial publication.

Root Cause

AI content generation tools created realistic but false obituaries when prompted, without verification mechanisms to confirm deaths or prevent publication of fabricated death notices for living people.

Mitigation Analysis

Human editorial oversight before publishing obituaries could have prevented this harm. Content verification systems requiring proof of death certificates or funeral home validation would have blocked false notices. Search engines implementing stronger spam detection for obituary content and faster removal processes for reported fake death notices could have reduced exposure.
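The verification gate described above can be sketched in code. This is a minimal, hypothetical illustration, not a real system's implementation: the names `ObituaryDraft` and `is_publishable` and the specific checks are assumptions chosen to show the pattern of requiring independent proof of death plus human sign-off before anything is published.

```python
from dataclasses import dataclass

@dataclass
class ObituaryDraft:
    """Hypothetical draft record for an AI-generated obituary."""
    subject_name: str
    has_death_certificate: bool   # proof-of-death document supplied
    funeral_home_confirmed: bool  # death independently validated by the funeral home
    human_reviewed: bool          # an editor has reviewed the draft

def is_publishable(draft: ObituaryDraft) -> bool:
    """Block publication unless the death is verified AND a human has signed off.

    Death must be confirmed by at least one independent source (certificate
    or funeral home validation); automated generation alone never suffices.
    """
    death_verified = draft.has_death_certificate or draft.funeral_home_confirmed
    return death_verified and draft.human_reviewed

# An AI-generated draft with no verification or review is rejected:
unverified = ObituaryDraft("Jane Doe", False, False, False)
print(is_publishable(unverified))  # False

# A draft with funeral home confirmation and editorial review passes:
verified = ObituaryDraft("John Roe", False, True, True)
print(is_publishable(verified))  # True
```

The key design point is that the two conditions are independent: automated proof-of-death checks catch fabricated subjects, while mandatory human review catches plausible-looking errors that slip past automation.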

Lessons Learned

This incident demonstrates the critical need for human verification in AI-generated content that makes factual claims about real people. It highlights the particular harm that can result when AI systems generate false information about sensitive life events, and the inadequacy of current content removal mechanisms for addressing AI-generated misinformation.