Zoom Updated Terms of Service to Allow AI Training on User Content Without Explicit Consent
Severity
High
Zoom faced major backlash after updating its terms of service in March 2023 to allow AI training on user content, including video calls, without explicit consent, affecting hundreds of millions of users before partially reversing the policy in August 2023.
Category
Privacy Leak
Industry
Technology
Status
Resolved
Date Occurred
Mar 27, 2023
Date Reported
Aug 7, 2023
Jurisdiction
US
AI Provider
Other/Unknown
Application Type
Other
Harm Type
privacy
People Affected
300,000,000
Human Review in Place
No
Litigation Filed
No
privacy · terms_of_service · ai_training_data · user_consent · enterprise_software · data_governance · zoom
Full Description
On March 27, 2023, Zoom quietly updated its Terms of Service to include Section 10.4, which granted the company broad rights to use customer content for training and tuning artificial intelligence and machine learning models. The updated language stated that Zoom could use 'Customer Content' including video recordings, audio, chat messages, and transcripts for 'product and service development' and 'artificial intelligence and machine learning' purposes. This change was implemented without prominent notification to users and applied retroactively to existing content.
The privacy implications became widely known in August 2023 when technology publications and privacy advocates began scrutinizing the terms. The language suggested that Zoom could potentially use recordings of confidential business meetings, personal conversations, therapy sessions, and educational content to train AI models. Given Zoom's massive user base of over 300 million meeting participants, the scope of potentially affected content was enormous. The terms did not provide clear opt-out mechanisms for users concerned about their data being used for AI training purposes.
Public outcry was immediate and severe, with major enterprise customers, privacy advocates, and technology commentators expressing alarm. High-profile users including government agencies, healthcare organizations, and educational institutions that rely on Zoom for sensitive communications raised concerns about compliance with privacy regulations like HIPAA, FERPA, and GDPR. The Electronic Frontier Foundation and other digital rights organizations criticized the broad language and retroactive application. Stock analysts noted potential customer churn risks as organizations considered switching to competitors.
Facing mounting pressure, Zoom's Chief Product Officer Smita Hashim published a blog post on August 7, 2023, attempting to clarify the company's position. The company stated that it would not use customer content to train AI models for general purposes without customer consent, and that audio, video, and chat content would not be used to train generative AI models unless customers specifically opted in. However, the original terms of service language remained largely unchanged, leading to continued confusion about actual data practices versus public statements.
The incident highlighted broader industry tensions around AI companies' data collection practices and the lack of granular consent mechanisms in enterprise software. While Zoom's clarifications addressed some immediate concerns, the episode demonstrated how quickly AI training data policies could create significant trust and compliance issues for platform companies. The company's handling of the situation, including the delayed and incomplete response to privacy concerns, raised questions about corporate governance of AI development practices and the adequacy of existing privacy frameworks for AI training scenarios.
Root Cause
Zoom's legal team updated terms of service language to broadly permit use of customer content for AI training and product development without implementing adequate notice mechanisms or opt-out procedures for users.
Mitigation Analysis
This incident could have been prevented by applying privacy-by-design principles: explicit opt-in consent mechanisms, granular data-use controls, and transparent data governance policies. A user-facing dashboard showing which content is used for training, paired with an easy opt-out, would have addressed privacy concerns proactively.
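The opt-in and granular-control mechanisms described above can be sketched in code. The following is a minimal illustration, not Zoom's actual implementation: all class, scope, and field names are hypothetical. It shows a default-deny consent registry with per-purpose scopes, so content only enters an AI training pipeline after an explicit, timestamped opt-in.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class ConsentScope(Enum):
    """Granular data-use purposes, so consent to one purpose
    does not silently extend to another (hypothetical scopes)."""
    SERVICE_OPERATION = "service_operation"
    PRODUCT_ANALYTICS = "product_analytics"
    AI_TRAINING = "ai_training"

@dataclass
class ConsentRecord:
    account_id: str
    scope: ConsentScope
    granted: bool
    recorded_at: datetime  # timestamped for auditability

class ConsentRegistry:
    """Default-deny: content is usable for a scope only after
    an explicit opt-in. Absence of a record means no consent."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, ConsentScope], ConsentRecord] = {}

    def record_opt_in(self, account_id: str, scope: ConsentScope) -> None:
        self._records[(account_id, scope)] = ConsentRecord(
            account_id, scope, True, datetime.now(timezone.utc))

    def record_opt_out(self, account_id: str, scope: ConsentScope) -> None:
        # An opt-out overwrites any prior opt-in.
        self._records[(account_id, scope)] = ConsentRecord(
            account_id, scope, False, datetime.now(timezone.utc))

    def is_permitted(self, account_id: str, scope: ConsentScope) -> bool:
        rec = self._records.get((account_id, scope))
        return rec is not None and rec.granted

def filter_training_batch(registry: ConsentRegistry, items: list[dict]) -> list[dict]:
    """Drop any content whose owner has not opted in to AI training."""
    return [i for i in items
            if registry.is_permitted(i["account_id"], ConsentScope.AI_TRAINING)]

registry = ConsentRegistry()
registry.record_opt_in("acct-1", ConsentScope.AI_TRAINING)
batch = [{"account_id": "acct-1", "content": "transcript A"},
         {"account_id": "acct-2", "content": "transcript B"}]
print(filter_training_batch(registry, batch))  # only acct-1's content survives
```

The key design choice is that the pipeline filter, not the terms-of-service text, enforces consent: retroactive application of the kind criticized here becomes impossible because unconsented content never reaches the training set.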
Lessons Learned
The incident demonstrates the critical importance of transparent AI data governance policies and the business risks of implementing broad data collection terms without adequate user notice and consent mechanisms. It also highlighted the need for clear regulatory frameworks governing AI training data collection practices.
Sources
Zoom's Terms of Service Now Allow Training AI on User Content
Stack Diary · Aug 7, 2023 · news
Zoom's Terms of Service and AI: How We're Listening and What We're Doing
Zoom Blog · Aug 7, 2023 · company statement
Zoom backtracks on terms that seemed to let it train AI on calls
The Washington Post · Aug 8, 2023 · news