Samsung Semiconductor Employees Leaked Confidential Data Through ChatGPT Prompts

High

Samsung semiconductor division employees leaked confidential source code, meeting recordings, and test data through ChatGPT prompts in March 2023. Samsung banned ChatGPT usage and implemented new AI policies after discovering at least three separate incidents within 20 days.

Category
Privacy Leak
Industry
Technology
Status
Resolved
Date Occurred
Mar 1, 2023
Date Reported
May 2, 2023
Jurisdiction
International
AI Provider
OpenAI
Model
ChatGPT
Application Type
chatbot
Harm Type
privacy
Human Review in Place
No
Litigation Filed
No
data_leak, trade_secrets, employee_error, corporate_policy, semiconductor, OpenAI_ChatGPT, enterprise_security

Full Description

In March 2023, Samsung's semiconductor division experienced multiple data security breaches when employees used ChatGPT to assist with work-related tasks. The incidents occurred within Samsung's Device Solutions (DS) division, which handles semiconductor manufacturing and is critical to Samsung's competitive position in memory chips and processors.

Three distinct incidents were documented within a 20-day period. In the first case, an employee copied source code related to Samsung's semiconductor testing programs and asked ChatGPT to optimize it. In the second, an employee recorded a meeting, transcribed the audio to text, and input the meeting notes into ChatGPT to generate a summary. In the third, an employee shared internal hardware test sequences and asked ChatGPT to analyze potential issues with the test data.

Samsung discovered these breaches through internal monitoring systems that flagged unusual data transfers. The company's security team traced the incidents back to individual employees who were unaware that their ChatGPT conversations were being stored and potentially used by OpenAI for model training. The leaked information included proprietary algorithms, strategic discussions about future product development, and detailed technical specifications that could benefit competitors.

Upon discovering the breaches, Samsung immediately implemented an enterprise-wide ban on ChatGPT and similar generative AI tools. The company also launched an internal investigation to assess the full scope of potential data exposure and began developing comprehensive AI usage guidelines. Samsung's response highlighted the broader challenge facing enterprises as employees increasingly use consumer AI tools without proper security oversight. The incident prompted Samsung to establish new policies requiring approval for any AI tool usage and to explore partnerships with enterprise AI providers that offer stronger data protection guarantees. The company also initiated mandatory training programs to educate employees about the data security risks of generative AI platforms.

Root Cause

Employees inadvertently shared sensitive corporate data by copying confidential information into ChatGPT prompts, without understanding that OpenAI retains conversation data and may use it to train its models.

Mitigation Analysis

Data loss prevention (DLP) tools monitoring copy-paste operations to external services could have blocked the transfers. Clear AI usage policies, backed by technical controls preventing access to consumer generative AI services from corporate networks and by employee training on data handling with AI tools, would likely have prevented these leaks. Additionally, deploying private AI instances or API integrations with data retention controls, rather than consumer chatbot interfaces, could have maintained functionality while protecting sensitive information.
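To illustrate the DLP idea above, here is a minimal sketch of a pre-submission prompt filter that an outbound gateway might apply before a prompt reaches an external AI service. The marker strings and code heuristics are hypothetical examples, not Samsung's actual rules; a real DLP deployment would use vendor rule sets tuned to the organization's data classification scheme.

```python
import re

# Illustrative classification markers only (assumed, not from the incident).
CONFIDENTIAL_MARKERS = [
    r"(?i)\bconfidential\b",
    r"(?i)\binternal use only\b",
    r"(?i)\btrade secret\b",
]

# Simple heuristics suggesting that source code is being pasted into a prompt.
CODE_HEURISTICS = [
    r"^\s*(def|class)\s+\w+",   # Python definitions
    r"^\s*#include\s*<\w+",     # C/C++ includes
    r";\s*$",                   # statement-terminated lines
]

def should_block(prompt: str) -> bool:
    """Return True if the prompt appears to contain restricted data."""
    # Any explicit classification marker blocks the prompt outright.
    for pattern in CONFIDENTIAL_MARKERS:
        if re.search(pattern, prompt):
            return True
    # Several code-like lines together suggest a source-code paste.
    code_hits = sum(
        1
        for line in prompt.splitlines()
        for pattern in CODE_HEURISTICS
        if re.search(pattern, line)
    )
    return code_hits >= 3
```

In practice such a check would sit in a proxy or browser extension, log the blocked attempt for the security team, and point the employee to an approved internal AI tool instead of silently failing.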

Lessons Learned

The incident demonstrates that well-intentioned employees can inadvertently create significant security vulnerabilities when using consumer AI tools with corporate data. Organizations need proactive technical controls and clear policies before AI adoption, not reactive bans after breaches occur.