
OpenAI’s next major ChatGPT update has invited controversy long before its launch. Starting in December 2025, and marking a sharp turn from the AI giant’s previously cautious stance on explicit content, the company will roll out an update that allows verified adult users to generate erotica.
OpenAI CEO Sam Altman, in an X post, explained the principle behind the update as one of “treat adult users like adults.” However, critics and users alike see the update as risky on OpenAI’s part, arguing it could erode public trust, fuel sexual harassment, and even expose young users to harm.
Although Altman says age-gating will be a key part of the update, users are worried about its effectiveness, with the most pressing question being how OpenAI will ensure it is not a teenager behind the screen, since many teens are known to use fake government IDs to bypass age-gating systems.
Altman also explains that the update will loosen the restrictive system previously put in place in response to mental health issues linked to ChatGPT. The CEO argues that they “are going to be able to safely relax the restrictions in most cases,” accommodating users who have no mental health issues and making the experience more enjoyable for them.
However, while the move may make room for adult autonomy and a more enjoyable experience with AI, it also risks amplifying mental health challenges already tied to the technology.
Additionally, it raises questions about how AI companies may exploit vulnerabilities, from mental health crises to child sexual abuse material (CSAM), for commercial gain, as well as about the efficacy of the guardrails regulatory bodies have put in place.
Exploiting Exploitations: How This Update Further Contributes To AI-Linked Mental Health Issues
The integration of erotica into ChatGPT for adult users does more than introduce a new content category; it intersects deeply with existing mental health controversies surrounding AI chatbots.
In August this year, a California-based couple sued OpenAI over the death of their teenage son, accusing ChatGPT of goading the 16-year-old into taking his own life. According to the BBC, which reported on the lawsuit, the couple argued that the AI-powered chatbot validated his “most harmful and self-destructive thoughts.”
Building on this sensitive case, critics have warned that the introduction of erotica will further exacerbate mental health issues as well as sexual harassment. With recent cases of people, especially men, using AI tools to digitally undress women and sexually harass them, OpenAI’s next update stands to act as an intensifier.
Although OpenAI promises age-gating, critics believe access to sexually explicit AI content will remain effectively unrestricted, which may worsen the emotional dependency and psychological distress already linked to AI companionship.
“It is no secret that sexual content is one of the most popular and lucrative aspects of the internet,” Jennifer King, a privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence, wrote in a critique of the move.
“By openly embracing business models that allow access to adult content, mainstream providers like OpenAI will face the burden of demonstrating that they have robust methods for excluding children under 18 and potentially adults under the age of 21,” King continued.
Furthermore, OpenAI’s introduction of erotica into ChatGPT to serve adult users feeds into a wider pattern of emotional commodification, in which companies monetize intimate user interactions.
Altman, however, argues that erotica will only be provided to users who explicitly ask for it, writing that if adult users want “ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
Still, the approach raises important ethical questions about where the boundary lies between catering to user demand and exploiting human vulnerability for profit.
The Makers-Users Conundrum: Navigating Backlash, Revenue, And User Demands
With this update, OpenAI finds itself caught between giving in to users’ desire for a more personalized, less restrictive ChatGPT, mitigating backlash from parents, regulators, and mental health advocates, and pursuing revenue without appearing to abandon its duty of care.
American businessman and prolific investor Mark Cuban encapsulated the conundrum bluntly, warning in an X post that no parent will trust OpenAI’s age verification and that the fallout could drive families and schools to abandon ChatGPT entirely.
“This is going to backfire. Hard. No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM,” Cuban noted.
The businessman also added that it is less about porn and more about “kids developing ‘relationships’ with an LLM that could take them in any number of very personal directions.”
“Parents today are afraid of books in libraries that kids don’t read. They ain’t seen nothing yet,” Cuban wrote. “Which in my OPINION, means that parents and schools, that would otherwise want to use ChatGPT because of its current ubiquity, will decide not to use it. It will be an ongoing battle for OpenAI. I don’t see the upside for them.”
Still, the AI giant pledges to treat users who experience mental health issues differently, aiming to balance freedom and safety. Whether this nuanced approach will satisfy users and regulatory bodies, or instead stoke further controversy, remains to be seen.
Will Regulation In Select Regions Put In Place Effective Guardrails?
Regulatory bodies, especially in the U.S. and Europe, have been intensifying scrutiny on tech companies’ responsibilities to protect users, particularly minors.
Recently, the Federal Trade Commission (FTC) launched an investigation into seven major tech companies, including OpenAI, over potential child safety issues associated with AI. The FTC ordered these companies to hand over reams of internal data detailing how their AI-powered chatbots are built, marketed, and policed for potential harm to children and teenagers.
Although such regulations signify important steps toward protecting vulnerable users, their practical effectiveness is in question. For instance, the FTC inquiry functions largely as an exploratory probe rather than immediate enforcement, reflecting the wider challenge regulators face in keeping pace with rapidly evolving AI technology and deployment models.
In the U.S., for example, legal and regulatory frameworks remain fragmented: state-level AI chatbot laws are not unified, creating an inconsistent compliance landscape for AI systems.
Recently, California Governor Gavin Newsom vetoed legislation that would have prohibited AI companies from offering AI chatbots to children and teenagers, arguing that “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems.”
In Europe, the EU AI Act is meant to act as a guardrail for AI developers and AI systems. The regulatory framework categorizes AI systems into risk levels, from minimal to unacceptable.
For instance, AI systems providing adult content, including sexually explicit material, fall under strict requirements, with mandated age assurance to prevent minors from being exposed. High-risk AI systems, meanwhile, must undergo conformity assessments, maintain transparent operations, and implement human oversight.
However, it remains uncertain whether OpenAI’s upcoming erotica update, scheduled for release in December, will fall definitively within the scope of the Act, or how swiftly enforcement would affect its rollout.