X's New Terms Make AI Prompts, Outputs Explicitly the User's Responsibility

X is revising its Terms of Service (ToS) effective Jan. 15, 2026, with an update that more explicitly brings AI activity -- including Grok prompts and outputs -- under the platform's content and misuse rules.

The company announced the new ToS in mid-December, explaining, "We've updated our Terms to better reflect that you are responsible for the content you post and create, including prompts, outputs, and/or information obtained when using X. We've also added additional prohibitions against interfering with the services."

The current ToS took effect on Nov. 15, 2024. The central change in the Jan. 15, 2026 version is that the new terms specifically identify AI interactions as "Content" that users are responsible for. Under the prior terms, X defined "Content" broadly and said users are responsible for the content they provide. The 2026 version expands that responsibility to cover any content a user "provide[s], create[s], post[s], or otherwise utilize[s]," and explicitly includes "inputs, prompts, outputs, and/or information obtained or created through the Services."

For users, the new language essentially means that what they type into an AI feature and what the feature returns are treated as content covered by the user agreement. It also reduces a common rhetorical escape hatch for generative AI: the idea that problematic material is "just what the model said." The new terms squarely place responsibility on the user for AI-related content created or used through X.

X's broad content license remains largely the same, including language that is directly relevant to AI training. In both versions, users retain ownership of their content but grant X a worldwide, royalty-free license to use it "for any purpose." Both versions also specify that the license includes the right for X to analyze user text and other information to "provide, promote, and improve the Services," including "for use with and training of our machine learning and artificial intelligence models."

The 2026 revisions do not introduce that right for the first time; rather, they pair it with clearer language stating that prompts and outputs are included in the content a user provides, creates or generates through the services.

The new terms also add an AI-specific misuse provision. The earlier terms already prohibited scraping, bypassing technical limits, reverse engineering and other interference. The 2026 version adds a new restriction barring attempts to "circumvent, manipulate, or disable" systems and services, including through "jailbreaking" and "prompt engineering or injection," when used to override or manipulate safety, security or other platform controls. While "prompt engineering" can describe legitimate prompt refinement, the ToS language frames the prohibited conduct as efforts aimed at defeating safeguards.

Beyond AI, the updated terms expand jurisdiction-specific framing tied to content enforcement in the European Union and the United Kingdom. The updated summary states that some jurisdictions require enforcement not only against illegal content but also against categories deemed "harmful" or "unsafe," and the body of the terms provides examples such as bullying and humiliating content, content encouraging eating disorders, and content that encourages or provides knowledge of methods of self-harm and suicide.

The new summary also adds U.K. complaint rights under the Online Safety Act 2023, while retaining references to EU redress under the Digital Services Act.

Here's a summary of the key changes:

AI prompts/outputs as "Content"
Old ToS (Nov. 15, 2024): Users are responsible for "Content you provide"; prompts and outputs are not explicitly identified.
New ToS (Jan. 15, 2026): Users are responsible for content they "provide, create, post, or otherwise utilize," including "inputs, prompts, outputs" and information obtained or created through the services.

AI training language
Old ToS: The license expressly includes analysis of user text to improve the services and for "training" of ML/AI models.
New ToS: The same training language remains, now alongside the explicit inclusion of prompts and outputs in what users provide, create, or generate.

AI-related misuse
Old ToS: No explicit reference to AI bypass tactics.
New ToS: Adds a prohibition on attempts to override platform controls via "jailbreaking" and "prompt engineering or injection."

EU/U.K. "harmful/unsafe" framing
Old ToS: General enforcement; EU DSA redress noted.
New ToS: Adds U.K. Online Safety Act complaint rights; expands "harmful/unsafe" framing with examples in EU/U.K. contexts.

For users, the updated terms amount to a clearer warning: AI interactions on X are treated as user content under the agreement, and attempts to defeat safety controls via jailbreak-style techniques are explicitly prohibited.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.