Inside OpenAI's New Report: How AI Is Fueling—and Fighting—Digital Threats
OpenAI has released a new report detailing how its security teams are identifying and countering the malicious use of AI models. The report, Disrupting Malicious Uses of AI: June 2025, outlines threats ranging from cyber operations against cloud infrastructure to large-scale social engineering and influence campaigns.
The company is leveraging its own AI models, combined with human oversight, to disrupt a growing number of adversarial campaigns. These include:
- Cyber operations targeting cloud infrastructure and software.
- AI-assisted social engineering and scams at scale.
- Influence operations using AI-generated posts on platforms like X, TikTok, Telegram, and Facebook.
The report documents ten case studies where OpenAI banned accounts and shared data with industry and government partners to reinforce collective security measures.
LLM-Fueled Job Scams
One case involves a North Korea-linked employment scam where actors used ChatGPT to fabricate convincing résumés and simulate job interviews. Key tactics included:
- Automated résumé generation using looping scripts, tailored to specific job roles and industries. (LLM-Supported Social Engineering)
- Model-assisted answers to technical questions and interviews, based on uploaded résumés. (LLM-Supported Social Engineering)
- Advice on geolocation masking to spoof corporate laptop configurations. (LLM-Enhanced Anomaly Detection Evasion)
- Assistance in writing scripts to automate mouse movement and device activity; a detection-side sketch follows this list. (LLM-Aided Development)
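The report does not reproduce the actors' scripts, but the last tactic has a straightforward defensive counterpart: automated input tends to be far more regular than human activity. The following Python sketch illustrates that detection idea; the function, threshold, and sample data are hypothetical and are not drawn from the report.

```python
# Illustrative sketch only: flag input-event streams whose timing is too
# regular to be human. The threshold and sample data are assumptions for
# demonstration, not anything taken from the OpenAI report.
import statistics

def looks_scripted(event_times: list[float], cv_threshold: float = 0.1) -> bool:
    """Return True when inter-event intervals are suspiciously uniform.

    Human mouse and keyboard activity has highly variable gaps between
    events; a looping automation script fires on a near-fixed schedule.
    The coefficient of variation (stdev / mean) of the gaps captures that.
    """
    if len(event_times) < 3:
        return False  # not enough events to judge
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True
    return statistics.stdev(gaps) / mean_gap < cv_threshold

# Example: mouse events exactly every 30 seconds, the signature of a
# "keep the laptop looking active" loop.
scripted = [t * 30.0 for t in range(20)]
print(looks_scripted(scripted))  # True
```

In practice, endpoint monitoring combines timing regularity with other signals, but the underlying intuition is the same: loops are rhythmic, people are not.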
Cloud-Centric Threat Activity
Several malicious campaigns detailed in the report demonstrate how actors exploit cloud infrastructure to amplify their attacks:
- Operation ScopeCreep: A Russian-speaking group used ChatGPT in the iterative development of Windows malware. The malware was distributed via GitHub and controlled through Telegram channels.
- KEYHOLE PANDA & VIXEN PANDA: Linked to China, these groups used AI for penetration testing, reconnaissance, and credential theft, targeting U.S. defense networks.
- Uncle Spam: A Chinese operation that deployed AI to generate divisive political content on X and Bluesky.
- Wrong Number: A campaign based in Cambodia that used multilingual AI-generated messages in SMS and messaging apps to lure victims into crypto scams.
Defensive AI in Practice
OpenAI describes its use of AI as a "force multiplier" for its investigative teams, providing visibility into malicious workflows and accelerating detection efforts with each disrupted campaign.
"Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses," the report says.
OpenAI emphasizes the importance of cross-industry collaboration and cautions that while AI can augment defenses, it remains only one part of the broader cybersecurity landscape.
For platform engineers, cloud architects, and security professionals, the report highlights how AI is being weaponized—and how it can also be a powerful tool for defending against modern threats.
About the Author
David Ramel is an editor and writer at Converge 360.