PhotoMath and Chegg AI Tools Provided Incorrect Solutions Leading to Student Misinformation
Medium
AI-powered homework tools including PhotoMath and Chegg AI provided incorrect mathematical solutions to students, causing incorrect submissions and flawed learning.
Category
Hallucination
Industry
Education
Status
Reported
Date Occurred
Jan 1, 2024
Date Reported
Mar 15, 2024
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Other
Harm Type
Operational
People Affected
100,000
Human Review in Place
No
Litigation Filed
No
Tags
homework, mathematics, education, student_harm, accuracy, learning_tools
Full Description
Multiple AI-powered homework assistance platforms, including PhotoMath and Chegg AI, were found to be providing incorrect solutions to mathematical and scientific problems in early 2024, with the issues first becoming widely apparent in January. These tools, which had gained widespread adoption among students for step-by-step problem-solving guidance, generated solutions containing computational errors, incorrect methods, and flawed reasoning that appeared superficially correct to users. Educators reported the problems, and systematic analysis completed by March 15, 2024 revealed a pattern of consistent mathematical errors across multiple platforms.
The technical failures stemmed from fundamental limitations in the AI models' mathematical reasoning capabilities and training data quality. The systems demonstrated particular weaknesses in advanced mathematics, calculus, and physics problems, where they would produce plausible-looking step-by-step solutions that contained critical errors in mathematical logic or computational steps. The AI models appeared to prioritize generating coherent-looking explanations over mathematical accuracy, suggesting inadequate verification systems and insufficient training on rigorous mathematical problem-solving methodologies. The errors were often subtle enough to bypass basic automated checking systems but significant enough to render the solutions fundamentally incorrect.
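To illustrate the kind of subtle calculus error described above, and how a cheap numerical check can catch what a superficial review misses, the sketch below compares a claimed derivative against a central finite difference at a few sample points. The function names, sample points, and tolerances are illustrative assumptions, not details from either platform.

```python
import math

def derivative_matches(f, claimed_df, xs=(-2.0, -0.5, 0.7, 1.3, 3.1),
                       h=1e-6, tol=1e-4):
    """Check a claimed derivative against a central finite difference.

    Returns False as soon as the claimed derivative disagrees with the
    numerical estimate (f(x+h) - f(x-h)) / 2h at any sample point.
    """
    for x in xs:
        numeric = (f(x + h) - f(x - h)) / (2 * h)
        if not math.isclose(numeric, claimed_df(x), rel_tol=tol, abs_tol=tol):
            return False
    return True

# Correct step: d/dx sin(x) = cos(x)
print(derivative_matches(math.sin, math.cos))                 # True
# Plausible-looking but wrong step: d/dx sin(x) = -cos(x)
print(derivative_matches(math.sin, lambda x: -math.cos(x)))   # False
```

A check like this costs a handful of function evaluations per step, yet it flags sign and coefficient errors that read as correct in a step-by-step explanation.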
An estimated 100,000 students were affected by these incorrect solutions, with educators across various institutions reporting patterns of similar wrong answers that traced back to AI homework tools. Students unknowingly submitted incorrect homework assignments and exam answers, while simultaneously learning faulty problem-solving approaches that could impact their long-term mathematical understanding. The incident created particular concern among mathematics and science educators who noted that students had developed dependency on these tools without acquiring the skills to verify AI-generated solutions independently. Academic institutions reported disruptions to grading processes and concerns about academic integrity as instructors struggled to distinguish between AI-generated errors and student misconceptions.
Both PhotoMath and Chegg acknowledged the accuracy issues in their AI systems and issued public statements committing to improvements in their mathematical problem-solving capabilities. The companies implemented enhanced verification protocols and began updating their AI models to address the identified computational weaknesses. Chegg specifically announced investments in additional quality assurance measures and human oversight for complex mathematical problems. However, neither company provided detailed timelines for complete resolution of the accuracy issues or comprehensive remediation plans for affected students.
The incident highlighted broader systemic risks in AI-powered educational technology and prompted discussions about regulatory oversight in the educational AI sector. Education technology experts and academic institutions began calling for standardized accuracy requirements and verification protocols for AI homework assistance tools. The widespread nature of the problem raised questions about the appropriate role of AI in education and the need for students to develop critical evaluation skills when using AI-generated content. Several educational institutions began updating their academic integrity policies to address the challenges posed by potentially inaccurate AI assistance tools.
Root Cause
AI models used by homework assistance platforms generated mathematically incorrect solutions due to training data limitations, insufficient mathematical reasoning capabilities, and lack of robust verification systems for computational accuracy.
Mitigation Analysis
Implementation of mathematical verification systems that cross-check AI solutions against known correct methods, human expert review of AI-generated solutions before publication, and integration with computer algebra systems for validation could have prevented these errors. Real-time accuracy monitoring and user feedback systems for flagging incorrect solutions would also reduce harm propagation.
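As a minimal sketch of the cross-checking idea above, the hypothetical helper below spot-checks a claimed algebraic identity by evaluating both sides at random points; a full deployment would use a computer algebra system, but even this lightweight test rejects many plausible-looking wrong "simplifications". All names and tolerances here are assumptions for illustration.

```python
import math
import random

def check_identity(lhs, rhs, trials=50, tol=1e-9, lo=-10.0, hi=10.0):
    """Numerically spot-check that lhs(x) == rhs(x) at random sample points.

    Returns True only if the two callables agree (within tol) at every
    sampled point where both are defined; one disagreement flags the
    AI-generated solution as incorrect.
    """
    for _ in range(trials):
        x = random.uniform(lo, hi)
        try:
            a, b = lhs(x), rhs(x)
        except (ValueError, ZeroDivisionError):
            continue  # skip points outside either expression's domain
        if not math.isclose(a, b, rel_tol=tol, abs_tol=tol):
            return False
    return True

# Correct expansion: (x + 1)^2 == x^2 + 2x + 1
print(check_identity(lambda x: (x + 1) ** 2,
                     lambda x: x * x + 2 * x + 1))   # True
# Plausible-looking but wrong: (x + 1)^2 == x^2 + 1
print(check_identity(lambda x: (x + 1) ** 2,
                     lambda x: x * x + 1))           # False
```

Running such a validator on every generated solution before it is shown to a student is one concrete form the "verification systems" described above could take.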
Lessons Learned
AI systems require specialized validation mechanisms for mathematical accuracy, and educational AI tools need robust human oversight to prevent the propagation of incorrect learning materials. Students and educators must be trained to critically evaluate AI-generated solutions rather than accepting them as authoritative.
Sources
AI Homework Tools Raise Accuracy Concerns Among Educators
Inside Higher Ed · Mar 15, 2024 · news
AI Tutoring Apps Give Wrong Answers to Math Problems
Education Week · Feb 20, 2024 · news