News
OpenAI Shifts Its Position on Adult Content, Says It's Not the 'Moral Police'
- By John K. Waters
- 10/20/2025
When OpenAI CEO Sam Altman announced last week that ChatGPT would soon allow verified adults to generate erotic content, he described it as part of OpenAI's "treat adult users like adults" principle. The feature, planned for a December rollout, represents a significant policy shift for a company that has maintained relatively strict content restrictions since its launch.
The announcement drew mixed reactions. Some users welcomed the expanded functionality, while critics questioned the timing and effectiveness of age verification systems. The debate highlights broader questions about how AI companies balance user autonomy with safety considerations, particularly as chatbots become increasingly sophisticated and widely used.
Following Market Demand
OpenAI's decision reflects existing user behavior patterns. An April survey of 6,000 regular AI users by the Harvard Business Review found that "companionship and therapy" was the most common use case for AI tools. Separately, Ark Invest reported that adult-focused AI platforms captured 14.5% of the market previously dominated by OnlyFans last year, up from 1.5% the year before.
Several competitors have already moved into this space. Elon Musk's Grok introduced "companion mode" earlier this year with various character options, including sexually suggestive personas. Platforms like Character.ai and Replika have built their business models around AI companionship. Meta has faced scrutiny after reports that its chatbots engaged in sexual conversations with minors.
Safety and Verification Concerns
The announcement came less than two months after OpenAI was sued by parents whose teenage son died by suicide earlier this year, with the lawsuit alleging that ChatGPT provided harmful advice. The timing has raised questions among some observers about how the company will implement effective safeguards.
"Comparing content moderation of chatbot interactions with movie ratings is not really useful," wrote Irina Raicu, director of the Internet Ethics program at Santa Clara University. "It downplays both the nature and the extent of the problems that we're seeing when people get more and more dependent on and influenced by chatbot 'relationships.'"
Mark Cuban expressed skepticism in an X post about age verification effectiveness: "I don't see how OpenAI can age-gate successfully enough," he wrote. "I'm also not sure that it can't psychologically damage young adults. We just don't know yet how addictive LLMs can be."
OpenAI has said it is developing an age prediction system and will default to an under-18 experience when a user's age cannot be confidently confirmed. Adults will have options to verify their age to access adult capabilities. The company has also formed an expert council of mental health professionals and is working on an under-18 version of ChatGPT.
Nearly 20 U.S. states have passed laws requiring age verification for online adult content sites. Jennifer King, a privacy and data policy fellow at Stanford University's Institute for Human-Centered Artificial Intelligence, has noted that "by openly embracing business models that allow access to adult content, mainstream providers like OpenAI will face the burden of demonstrating that they have robust methods for excluding children under 18 and potentially adults under the age of 21."
User Attachment and Business Pressures
The policy change builds on growing evidence that users form emotional connections with AI chatbots. When OpenAI recently replaced GPT-4o with its newer GPT-5 model, users organized opposition. A petition to restore GPT-4o gathered nearly 6,000 signatures. "For many of us, GPT-4o offers a unique and irreplaceable user experience," the petitioners wrote, "combining qualities and capabilities that we value, regardless of performance benchmarks." OpenAI eventually restored access to GPT-4o.
The decision also comes as OpenAI faces business pressures. Bloomberg reported that the company recently completed a deal valuing it at $500 billion. With 800 million weekly active users and significant infrastructure investments, OpenAI needs to convert free users into paying subscribers. Sexual content has historically been a significant driver of internet traffic and revenue.
The shift isn't entirely new. In February, OpenAI updated its Model Spec to relax rules around sexual and violent content in what it called a move away from "AI paternalism." The revised guidelines permitted written erotica in appropriate contexts. The December rollout is simply making public what the company had already begun building.
Regulatory Landscape
The Federal Trade Commission has opened an inquiry into seven AI chatbot developers, including OpenAI, Meta, and xAI, "seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens."
Senator Josh Hawley is circulating a draft bill that would ban AI companions for minors. Several states have enacted age verification measures for online platforms, with state attorneys general pressing enforcement.
California Assemblymember Rebecca Bauer-Kahan, whose bill requiring safety guardrails for companion chatbots was vetoed earlier in October, noted the timing of events. "Here was a bill that was really requiring very clear, safe-by-design AI for children with real liability," she told KQED. "And I think that was further than the industry wanted California to go. I just found the timing of the veto and then this announcement about access to erotica too coincidental not to call out."
Bauer-Kahan expressed concern about AI chatbots' potential effects on young users: "My fear is that we are on a path to creating the next, frankly, more addictive, more harmful version of social media for our children. I do not think that the addictive features in these chatbots that result in our children having relationships with a chatbot instead of their fellow humans is a positive thing."
Balancing Autonomy and Protection
In response to criticism, Altman acknowledged that the announcement had "blown up on the erotica point much more than I thought it was going to." He positioned adult content as "just one example of us allowing more user freedom for adults," not a retreat from safety measures.
"We are not the elected moral police of the world," Altman wrote in response to the backlash, adding that ChatGPT would continue to "prioritize safety over privacy and freedom for teenagers" while giving adults more autonomy. "In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here."
Jessica Ji, a senior research analyst at Georgetown's Center for Security and Emerging Technology, has observed that OpenAI faces a fundamental tension in its positioning. While the company promotes narratives about building artificial general intelligence that will transform the economy, its actual operations increasingly resemble those of a social media platform responding to user engagement patterns. This creates a disconnect between the ambitious vision presented to investors and policymakers and the day-to-day realities of running a consumer-facing chatbot service.
Ongoing Questions
The debate over AI-generated adult content intersects with broader questions about AI companionship. Research indicates that some users seek emotional support and connection from AI chatbots, raising questions about the psychological effects of these interactions that researchers are only beginning to study.
Altman said that in the coming weeks, users will be able to better customize ChatGPT's personality. "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing)," he wrote.
As OpenAI implements these changes, the company will face scrutiny over whether its age verification and safety systems work as intended. The move also puts pressure on other AI companies to clarify their own policies around adult content and user safety.
The situation reflects a familiar pattern: technology companies have repeatedly faced backlash, litigation, and regulation when introducing features that change how people relate to digital platforms. How effectively OpenAI and its competitors navigate these challenges will likely shape both regulatory responses and industry standards going forward.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].