
CNET Published AI-Generated Articles Containing Factual Errors

Medium

CNET quietly published dozens of AI-generated financial explainer articles under the byline "CNET Money Staff" without disclosing the use of AI. Journalists and readers discovered that multiple articles contained factual errors, including incorrect explanations of basic financial concepts like compound interest.

Category
Hallucination
Industry
Media
Status
Resolved
Date Occurred
Nov 1, 2022
Date Reported
Jan 12, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Model
Unknown
Application Type
API integration
Harm Type
Reputational
Human Review in Place
Yes
Litigation Filed
No

Full Description

Starting in November 2022, CNET quietly began using an AI content generation tool to produce financial explainer articles on topics such as savings accounts, certificate of deposit (CD) rates, and mortgage fundamentals. The articles were published under the generic byline "CNET Money Staff" with no disclosure to readers that artificial intelligence had been used in their creation. The practice continued for roughly two months before external journalists discovered it in January 2023.

The AI-generated articles contained multiple factual errors about basic financial concepts. Most significantly, an article explaining compound interest misstated how compounding works, claiming that a $10,000 deposit earning 3% interest compounded annually would earn $10,300 at the end of the first year; in fact, the deposit would earn $300 in interest, bringing the balance to $10,300 (and to $10,609 after a second year). Other articles contained errors in explanations of CD laddering strategies and related financial planning concepts that readers might rely on for personal financial decisions.

The incident damaged CNET's editorial credibility and reader trust, particularly because financial advice content demands high accuracy given its potential impact on readers' financial well-being. The discovery drew significant criticism from journalism ethics experts and financial literacy advocates, who noted that errors in financial education content could lead consumers to poor financial decisions. It also raised questions about CNET's editorial oversight and whether human editors were adequately reviewing AI-generated content before publication.

Following the January 12, 2023 investigation by Futurism reporter Jon Christian that exposed the practice, CNET initially defended its use of AI tools, saying that all articles underwent human editorial review. The company subsequently paused its AI content generation program, added disclosure labels identifying existing AI-written articles as AI-generated, and issued corrections for the factual errors. CNET editor-in-chief Connie Guglielmo stated that the company would implement additional review processes before potentially resuming AI-assisted content creation.

The incident became a significant case study in the journalism industry on transparency requirements for AI-generated content and the limits of AI systems in producing factually accurate information. It demonstrated that AI hallucinations and errors can extend beyond conversational applications into published editorial content, potentially affecting thousands of readers who did not know they were consuming AI-generated material. The case contributed to ongoing industry discussions about mandatory AI disclosure requirements and the need for stronger editorial review when incorporating AI tools into content production workflows.

Root Cause

CNET used an AI tool to generate financial explainer articles under the byline "CNET Money Staff." The articles were published with minimal human editorial review. Readers and journalists discovered that the AI-generated articles contained basic factual errors, including incorrect explanations of how compound interest works.
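
For reference, the arithmetic the article got wrong is straightforward. A minimal sketch in Python, using the figures from the example above (the helper name is illustrative):

def compound_balance(principal: float, annual_rate: float, years: int) -> float:
    # Balance after compounding annually: A = P * (1 + r)^n
    return principal * (1 + annual_rate) ** years

principal = 10_000.00
rate = 0.03  # 3% annual interest, compounded annually

year_1 = compound_balance(principal, rate, 1)   # 10300.00
year_2 = compound_balance(principal, rate, 2)   # 10609.00

# The interest *earned* in year one is the balance minus the principal:
interest_year_1 = year_1 - principal            # 300.00, not the $10,300 the article claimed
print(f"Year 1 balance: ${year_1:,.2f} (interest earned: ${interest_year_1:,.2f})")
print(f"Year 2 balance: ${year_2:,.2f}")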

Mitigation Analysis

Provenance tracking would have allowed CNET to identify clearly which articles were AI-generated versus human-written, enabling targeted quality review. An audit trail linking each published article to its generation method, review status, and editorial approval chain would have supported both internal quality control and external transparency. This case demonstrates that even seemingly low-risk content generation, such as financial explainers, requires robust quality controls when AI is involved.
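
As a sketch of what such provenance records might look like, here is a hypothetical schema in Python; the field names and publish gate are illustrative assumptions, not drawn from CNET's actual systems:

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class GenerationMethod(Enum):
    HUMAN = "human"
    AI_ASSISTED = "ai_assisted"
    AI_GENERATED = "ai_generated"

@dataclass
class ArticleProvenance:
    # One audit-trail record per published article.
    article_id: str
    generation_method: GenerationMethod
    model_identifier: str | None = None        # tool/model used, if any
    fact_checked_by: list[str] = field(default_factory=list)
    approved_by: str | None = None             # editor who signed off
    disclosure_label: bool = False             # is AI use disclosed to readers?
    published_at: datetime | None = None

def ready_to_publish(record: ArticleProvenance) -> bool:
    # AI-touched copy requires a fact-checker, an approving editor,
    # and a reader-facing disclosure before it can go live.
    if record.generation_method is GenerationMethod.HUMAN:
        return record.approved_by is not None
    return (
        bool(record.fact_checked_by)
        and record.approved_by is not None
        and record.disclosure_label
    )

A publish gate like ready_to_publish() makes the review and disclosure requirements enforceable at publication time rather than advisory.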

Lessons Learned

AI-generated content requires explicit disclosure and rigorous fact-checking, even for seemingly straightforward explainer content. Human editorial review is not a sufficient safeguard on its own when reviewers are primed to trust AI output. Volume amplifies risk: publishing AI content at scale requires quality controls that scale with it.

Sources