Google Bard Demo Error Wipes $100B from Alphabet Market Cap

Critical

Google Bard made a factual error in its public launch demo, incorrectly claiming the James Webb Space Telescope took the first pictures of exoplanets. The error was spotted by astronomers on social media. Alphabet stock dropped 7.7% the following day, erasing approximately $100 billion in market capitalization.

Category
Hallucination
Industry
Technology
Status
Resolved
Date Occurred
Feb 6, 2023
Date Reported
Feb 8, 2023
Jurisdiction
US
AI Provider
Google
Model
Bard (LaMDA)
Application Type
chatbot
Harm Type
financial
Estimated Cost
$100,000,000,000
Human Review in Place
No
Litigation Filed
No
product_launch, market_impact, factual_error

Full Description

On February 6, 2023, Google publicly released a promotional video demonstrating Bard, its AI chatbot competitor to OpenAI's ChatGPT, as part of a high-profile product launch. In the video, Bard was asked what new discoveries from the James Webb Space Telescope could be shared with a 9-year-old child. The system responded that the telescope "took the very first pictures of a planet outside of our own solar system," a factually incorrect claim seen by millions of viewers at the very moment Google was attempting to reassert its leadership in AI following the rapid success of ChatGPT.

Bard is built on Google's LaMDA (Language Model for Dialogue Applications) large language model. The system generated a confident but inaccurate response, a classic AI hallucination in which the model presents false information as fact. In reality, the first confirmed image of an exoplanet was captured by the European Southern Observatory's Very Large Telescope (VLT) in 2004, nearly two decades before the James Webb Space Telescope became operational. The error underscored large language models' tendency to produce plausible-sounding but factually incorrect output, a weakness made especially costly by the high-stakes public demonstration.

The financial impact was immediate and severe. Alphabet's stock fell 7.7% on February 8, 2023, the trading day following widespread coverage of the error, erasing approximately $100 billion in market capitalization in a single day, one of the largest single-day value losses in corporate history.

Beyond the financial damage, the incident hurt Google's reputation as an AI leader and raised questions about its quality assurance processes. Astronomers and science communicators quickly identified the error and circulated it widely on social media, amplifying the reputational harm and generating negative publicity precisely when Google needed to demonstrate AI superiority. Google's response was notably muted: the company acknowledged the error but provided limited public commentary on the specific failure or remediation steps. It did not immediately retract or correct the promotional video, and executives made only brief statements about the importance of rigorous testing in AI development. Internal reports suggested the error may have stemmed from insufficient fact-checking protocols for the demonstration content, though Google offered no detailed technical explanation for the hallucination. The incident prompted internal reviews of Bard's training and deployment processes, though specific technical changes were not publicly disclosed.

The Bard demonstration error became a watershed moment for the AI industry, illustrating the stakes of AI system failures in competitive commercial environments. It occurred during a period of intense competition among major tech companies to ship generative AI, with Microsoft having recently announced the integration of OpenAI's technology into Bing search. Industry analysts noted that the error highlighted the tension between rapid deployment to maintain competitive advantage and the need for comprehensive testing and validation of AI system outputs.

The incident also fed broader discussions about AI safety, the reliability of large language models, and appropriate standards for deploying AI in consumer-facing applications, influencing subsequent development and deployment strategies across the technology sector.

Root Cause

In a promotional demo video, Google Bard incorrectly stated that the James Webb Space Telescope took the first pictures of exoplanets outside our solar system. The first exoplanet image was actually captured by the VLT in 2004. The error was not caught before the demo was published.

Mitigation Analysis

A fact-verification step on AI-generated promotional content would have caught this. More broadly, provenance tracking linking the demo output to the model version and prompt would have enabled rapid triage. The incident illustrates that even in marketing contexts, AI output verification is critical when the stakes involve market-moving events.
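As a minimal sketch of what such a control could look like (all names and structures here are hypothetical, not Google's actual process): a provenance record ties each demo output to the model version and prompt that produced it, and a publication gate blocks release until every factual claim in the output has been human-verified.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a provenance-tracked publication gate for
# AI-generated promotional content. Names are illustrative only.

@dataclass
class Claim:
    text: str
    verified: bool = False  # flipped only by a human fact-checker

@dataclass
class DemoOutput:
    model_version: str  # enables rapid triage back to the model build
    prompt: str         # the exact prompt used in the demo
    response: str
    claims: list[Claim] = field(default_factory=list)

    def ready_to_publish(self) -> bool:
        """Publication gate: every extracted claim must be verified."""
        return bool(self.claims) and all(c.verified for c in self.claims)

demo = DemoOutput(
    model_version="demo-model-v1",
    prompt="What new JWST discoveries can I share with a 9-year-old?",
    response="JWST took the very first pictures of an exoplanet.",
    claims=[Claim("JWST took the first image of an exoplanet")],
)

# Blocked: the claim is unverified (and, as it happens, false).
assert not demo.ready_to_publish()
```

The design choice worth noting is that the gate fails closed: content with no extracted claims, or any unverified claim, cannot ship, so the default path is review rather than release.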

Lessons Learned

AI product demos require the same fact-checking rigor as any public statement. Market consequences of AI errors extend beyond the direct harm. Speed-to-market pressure does not reduce the cost of public errors.