Meta's Galactica Scientific AI Shut Down After Generating Fake Research Papers and Biased Content
Severity
High
Meta's Galactica AI for scientific text generation was shut down after just three days when it began confidently producing fake research papers, fabricated citations, and biased scientific content, raising serious concerns about AI-generated misinformation in academic contexts.
Category
Hallucination
Industry
Technology
Status
Resolved
Date Occurred
Nov 15, 2022
Date Reported
Nov 17, 2022
Jurisdiction
International
AI Provider
Meta
Model
Galactica
Application Type
API Integration
Harm Type
Reputational
Estimated Cost
$50,000,000
Human Review in Place
No
Litigation Filed
No
Tags
scientific_misinformation, hallucination, academic_integrity, model_withdrawal, bias, Meta, large_language_model
Full Description
In November 2022, Meta released Galactica, a large language model specifically designed to assist with scientific writing and research. The 120-billion parameter model was trained on over 48 million scientific papers, reference materials, and academic databases, with the goal of helping researchers generate scientific text, citations, and summaries. Meta positioned Galactica as a breakthrough tool that could democratize scientific writing and accelerate research processes.
Within hours of its public release on November 15, 2022, researchers and scientists began testing Galactica's capabilities and quickly discovered serious flaws. The AI confidently generated entirely fabricated scientific papers that appeared legitimate but contained false information. Users found that Galactica would create convincing-sounding research on topics ranging from quantum physics to medical treatments, complete with fake citations to non-existent papers and authors. The model presented these fabrications with the same confidence level as factual content, making it difficult for non-experts to distinguish between real and fake information.
Particularly concerning was Galactica's tendency to produce biased and potentially harmful content when prompted with topics related to race, religion, and social issues in scientific contexts. Researchers documented instances where the AI generated pseudo-scientific claims that reinforced harmful stereotypes or promoted discredited theories. The model also demonstrated a propensity to generate content that appeared to validate conspiracy theories or fringe scientific viewpoints when framed in academic language.
The scientific community's reaction was swift and overwhelmingly negative. Prominent researchers and institutions criticized Meta for releasing a tool that could undermine scientific integrity and flood academic discourse with misinformation. Critics argued that the model's authoritative presentation of false information could be particularly dangerous in fields like medicine or climate science, where inaccurate information could have real-world consequences. Social media platforms saw widespread criticism from academics who demonstrated the model's failures and called for its immediate withdrawal.
Facing mounting pressure and recognizing the severity of the issues, Meta made the decision to shut down public access to Galactica on November 17, 2022, just three days after its launch. The company acknowledged the problems with the model and stated that it would continue development internally. Meta's Chief AI Scientist Yann LeCun defended the research but admitted that the public release was premature and that additional safeguards were needed before any future deployment.
Root Cause
The model was trained on scientific literature but lacked verification mechanisms to distinguish between factual and fabricated content. It confidently generated plausible-sounding but false scientific claims, citations to non-existent papers, and biased content without appropriate safeguards or disclaimers.
Mitigation Analysis
Implementation of fact-checking mechanisms, citation verification systems, and human expert review before publication could have prevented this incident. Additionally, clear disclaimers about the experimental nature of generated content and restrictions on direct publication of AI-generated scientific claims would have reduced the risk of misinformation spread.
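As a minimal illustration of the citation-verification idea described above, the sketch below gates model output on whether each generated citation can be matched against a trusted bibliographic index. All names here (the `trusted_index` lookup, the `gate_output` helper) are hypothetical assumptions for illustration, not part of Meta's actual pipeline or any real verification API.

```python
# Hypothetical sketch: a citation-verification gate for generated text.
# Everything here is illustrative; Meta's internal tooling is not public.

from dataclasses import dataclass


@dataclass
class Citation:
    title: str
    doi: str


def verify_citations(citations, trusted_index):
    """Split citations into verified and unverifiable lists by looking
    each DOI up in a trusted index (e.g., a local Crossref snapshot)."""
    verified, unverifiable = [], []
    for c in citations:
        # A citation passes only if its DOI exists AND the recorded title
        # matches; a real DOI attached to an invented title is exactly the
        # kind of fabrication reported for Galactica.
        known_title = trusted_index.get(c.doi)
        if known_title is not None and known_title == c.title:
            verified.append(c)
        else:
            unverifiable.append(c)
    return verified, unverifiable


def gate_output(text, citations, trusted_index):
    """Pass text through unchanged only if every citation verifies;
    otherwise prepend a flag and hold it for human expert review."""
    _, unverifiable = verify_citations(citations, trusted_index)
    if unverifiable:
        flagged = ", ".join(c.doi for c in unverifiable)
        return f"[UNVERIFIED CITATIONS: {flagged}] Hold for expert review.\n" + text


    return text


# Tiny stand-in index and a mixed set of citations:
index = {"10.1000/real": "A Real Paper"}
cites = [
    Citation("A Real Paper", "10.1000/real"),
    Citation("Fabricated Study", "10.1000/fake"),
]
print(gate_output("Draft summary...", cites, index))
```

The key design choice is that the gate fails closed: anything not positively verified is routed to human review rather than published with a warning label, matching the "human expert review before publication" mitigation described above.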
Lessons Learned
The Galactica incident highlighted the critical importance of extensive testing and safeguards before releasing AI models in sensitive domains like scientific research. It demonstrated that technical capability alone is insufficient without robust verification mechanisms and appropriate human oversight, particularly in fields where accuracy and trustworthiness are paramount.
Sources
Meta shuts down AI system after it writes fake scientific papers
The Verge · Nov 17, 2022 · news
Meta's Galactica AI can write scientific papers — but scientists are worried
Nature · Nov 17, 2022 · news