Researchers Offer 'A Taxonomy of Tactics' for Generative AI Misuse

The positive potential of generative AI (GenAI) would be hard to overstate, which is why such a wide range of organizations, from healthcare and education to manufacturing to IT, are going live with GenAI initiatives, and virtually everyone is giving it a hard look. However, the adoption of this technology has also raised concerns about the risks posed by its misuse.

A group of researchers from Google DeepMind, Jigsaw, and set out to clarify those risks and provide "a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm," and they recently published their findings in a paper entitled, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data."

The authors of the paper, Nahema Marchal, Rachel Xu, Rasmi Elasmar, Iason Gabriel, Beth Goldberg, and William Isaac, emphasized that, as GenAI capabilities continue to advance, understanding the specific ways in which these tools are exploited is critical for developing effective safeguards. Their "taxonomy of GenAI misuse tactics" is meant to provide a framework for identifying and addressing the potential harms associated with these technologies, they wrote, ultimately aiming to ensure their responsible and ethical use.

The researchers based their study on the qualitative analysis of approximately 200 incidents reported between January 2023 and March 2024. That analysis revealed key patterns and motivations behind the misuse of GenAI, including:

Manipulation of Human Likeness
The most prevalent tactics involve the manipulation of human likeness, such as impersonation, "sockpuppeting," and "non-consensual intimate imagery."

Low-Tech Exploitation
Most misuse cases do not involve sophisticated technological attacks, but rather exploit easily accessible GenAI capabilities requiring minimal technical expertise.

Emergence of New Forms of Misuse
The availability and accessibility of GenAI tools have introduced new forms of misuse that, although not overtly malicious or policy-violative, have concerning ethical implications, such as blurring the lines between authenticity and deception in political outreach and self-promotion.

The study also identified two categories of misuse tactics:

Exploitation of GenAI Capabilities

  • Impersonation: Creating AI-generated audio or video to mimic real people.Appropriated Likeness: Using or altering a person's likeness without consent.
  • Sockpuppeting: Creating synthetic online personas.
  • NCII: Generating explicit content without consent.
  • Falsification: Fabricating evidence such as reports or documents.
  • IP Infringement: Using someone’s intellectual property without permission.
  • Counterfeit: Producing items that imitate original works and pass as real.
  • Scaling and Amplification: Automating and amplifying content distribution.
  • Targeting & Personalization: Refining outputs for targeted attacks.

Compromise of GenAI Systems

  • Adversarial Inputs: Modifying inputs to cause a model to malfunction.
  • Prompt Injections: Manipulating text instructions to produce harmful outputs.
  • Jailbreaking: Bypassing model restrictions and safety filters.
  • Model Diversion: Repurposing models for unintended uses.
  • Steganography: Hiding messages within model outputs.
  • Data Poisoning: Corrupting training datasets to introduce vulnerabilities.
  • Privacy Compromise: Revealing sensitive information from training data.
  • Data Exfiltration: Illicitly obtaining training data.
  • Model Extraction: Stealing model architecture and parameters.

The paper provides insights for policymakers, trust and safety teams, and researchers to help them develop strategies for AI governance and mitigate real-world harms, the authors wrote. They emphasized the need for better technical safeguards, non-technical user-facing interventions, and ongoing monitoring of the evolving misuse landscape to protect against the diverse and growing threats posed by GenAI.

About the Author

John K. Waters is the editor in chief of a number of sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS.  He can be reached at