WormGPT and FraudGPT Criminal AI Tools Sold on Dark Web for Cybercrime

Severity
High

Criminal AI tools WormGPT and FraudGPT were discovered being sold on dark web forums in 2023, specifically designed to help cybercriminals create phishing emails, malware, and social engineering attacks without safety restrictions.

Category
Safety Failure
Industry
Technology
Status
Ongoing
Date Occurred
Jul 1, 2023
Date Reported
Jul 12, 2023
Jurisdiction
International
AI Provider
Other/Unknown
Model
WormGPT, FraudGPT
Application Type
API Integration
Harm Type
Financial
Human Review in Place
No
Litigation Filed
No
cybercrime, dark_web, phishing, malware, social_engineering, criminal_ai, fraud

Full Description

In July 2023, cybersecurity researchers discovered multiple criminal AI tools being actively marketed and sold on dark web forums, with WormGPT and FraudGPT the most prominent examples. These tools were designed and marketed as alternatives to mainstream AI models such as ChatGPT, but without the safety guardrails or ethical restrictions that would prevent their use for criminal activity.

WormGPT was advertised as an AI tool trained on diverse data sources with a focus on malware creation, with the explicit promise that it had no ethical boundaries or limitations. It was marketed to cybercriminals for crafting convincing phishing emails, generating malware code, and developing sophisticated social engineering attacks. FraudGPT offered similar capabilities, specifically targeting business email compromise (BEC) attacks, credit card fraud schemes, and other financial crimes.

The tools were sold through subscription models on dark web marketplaces at prices ranging from $60 to $200 per month. Vendors provided demonstrations showing the tools generating convincing phishing emails that could bypass traditional spam filters, creating personalized social engineering content, and producing code for various types of malware. The tools were promoted as more effective than traditional methods because they could generate unique content that was less likely to be detected by security systems.

Security researchers who analyzed samples of content generated by these tools found they could produce highly convincing phishing emails employing sophisticated social engineering techniques, including BEC attacks aimed at specific companies and individuals. The tools could tailor attacks using publicly available information about targets, making them particularly dangerous for targeted cybercrime campaigns.

The discovery highlighted broader concerns about the democratization of AI technology for criminal purposes and the difficulty of preventing misuse of large language models. Unlike safety failures in legitimate AI systems, these tools represented intentional criminal applications of AI, showing that bad actors were actively working to circumvent the safety measures implemented by mainstream AI companies.

Root Cause

Criminal actors developed and deployed AI models specifically designed without safety guardrails, trained or fine-tuned to assist with illegal activities including fraud, phishing, and malware creation.

Mitigation Analysis

This incident highlights the need for robust AI safety frameworks including model access controls, content filtering, and responsible disclosure practices. Proactive monitoring of dark web marketplaces and collaboration between AI companies and law enforcement could help identify misuse patterns. Implementation of model watermarking and usage tracking could help trace criminal applications.
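
To make the content-filtering mitigation concrete, the following is a minimal, hypothetical sketch of a provider-side prompt screen that flags abuse-related requests before they reach a model. It is not any vendor's actual safety stack; the category names, phrase lists, and thresholds are illustrative assumptions, and a production system would rely on trained classifiers and continuously updated signals rather than static keyword rules.

# Hypothetical sketch of a provider-side prompt filter; all names and
# patterns are illustrative assumptions, not a real vendor's safety stack.
import re
from dataclasses import dataclass

# Illustrative phrase lists per abuse category. A production system would use
# trained classifiers and regularly updated threat intelligence instead.
ABUSE_PATTERNS = {
    "phishing": [
        r"\bphishing (email|page|template)\b",
        r"\bcredential[- ]harvest",
        r"\bspoof(ed)? (login|invoice|payment) page\b",
    ],
    "malware": [
        r"\b(keylogger|ransomware|rootkit)\b",
        r"\bbypass (antivirus|edr|spam filter)s?\b",
        r"\bobfuscate (the )?payload\b",
    ],
    "bec_fraud": [
        r"\bbusiness email compromise\b",
        r"\bimpersonat(e|ing) (the )?(ceo|cfo|vendor)\b",
        r"\bwire transfer instructions\b",
    ],
}

@dataclass
class FilterDecision:
    allowed: bool
    categories: list[str]

def screen_prompt(prompt: str) -> FilterDecision:
    """Flag prompts matching known abuse phrasing before model inference."""
    text = prompt.lower()
    hits = [
        category
        for category, patterns in ABUSE_PATTERNS.items()
        if any(re.search(pattern, text) for pattern in patterns)
    ]
    return FilterDecision(allowed=not hits, categories=hits)

if __name__ == "__main__":
    request = "Write a convincing phishing email that spoofs our vendor's invoice."
    decision = screen_prompt(request)
    if decision.allowed:
        print("Request passed screening; forward to the model.")
    else:
        print(f"Request blocked; flagged categories: {decision.categories}")

Such screening only helps providers who choose to apply it; as this incident shows, tools like WormGPT and FraudGPT are built precisely to omit this layer, which is why marketplace monitoring and law enforcement collaboration remain necessary complements.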

Lessons Learned

This incident demonstrates the inevitability of criminal misuse of AI technology and the need for proactive measures to combat malicious AI applications. It highlights the importance of international cooperation in addressing AI-enabled cybercrime and the challenges of enforcing AI safety in decentralized, anonymous environments.