Mark Cuban warns that OpenAI’s new plan to allow adults-only erotica in ChatGPT could ‘backfire. Hard.’

Billionaire investor Mark Cuban warns that OpenAI is risking a massive crisis of confidence with parents and schools after CEO Sam Altman announced that the company plans to start allowing erotica in ChatGPT for “verified adults” in December.
Cuban called the move reckless and said parents would abandon ChatGPT the minute they thought their children could bypass the company’s age verification system to access inappropriate content.
Cuban made his argument in a series of posts responding to Altman. In other words: if there’s any possibility that minors could access explicit content — including AI-generated content — parents and school districts will shut ChatGPT out before ever testing its safeguards, making the move a poor business strategy.
Altman, however, argued in his original post announcing the change that ChatGPT had become “restrictive” and “less fun” after the company tightened its flagship chatbot’s behavior in response to criticism that it was contributing to mental health issues. He added that the upcoming update will allow for a product that “behaves like what people loved about 4o.”
Psychological concerns
Cuban has repeatedly stressed in follow-up posts that the controversy is not about adults accessing erotica. It’s about children forming romantic relationships with AI without their parents knowing — relationships that could easily go sideways.
“I’ll say it again. This is not about porn,” he wrote. “It’s about kids developing ‘connections’ with an LLM that can take them in any number of very personal directions.”
Sam Altman has, in the past, seemed wary of allowing sexual conversations on his platform. In an interview in August, tech journalist Cleo Abram asked Altman to give an example of a business decision that was better for the world at the expense of his company’s rise.
“Well, we haven’t put a sex bot avatar in ChatGPT yet,” Altman said.
Follow the money
The move comes amid growing concerns that the billions pouring into AI may not translate into sustainable revenue or deliver on the hype-driven industry’s promises. Altman — though he has acknowledged that investors may be “overly excited” about AI — has speculated that AI will soon surpass human capabilities, leading to an abundance of “intelligence and power” by 2030. In a September blog post, he imagined that AI might one day cure cancer or provide personalized tutoring to every student on earth.
Announcements like allowing erotica in ChatGPT, however, may indicate that AI companies are fighting harder than ever for growth — and are willing to sacrifice long-term consumer trust for short-term revenue. Recent research by Deutsche Bank shows that consumer demand for OpenAI subscriptions in Europe has remained flat, and that user spending on ChatGPT has broadly stalled.
“The poster child of the AI boom may struggle to recruit new subscribers to pay for it,” analysts Adrian Cox and Stephane Abrodan said in a note to clients.
AI companion platforms such as Replika and Character.ai have shown how quickly users — especially teens — can form emotional connections with chatbots. A Common Sense Media report found that half of teens regularly use AI companions, a third have chosen AI companions over humans for serious conversations, and a quarter have shared personal information with these platforms. With input from researchers at Stanford University, the group argued that minors should not use such chatbots at all, citing heightened risks of addiction and self-harm.
OpenAI did not immediately respond to Fortune’s request for comment.
Parents urge action
OpenAI is already under fire after being sued by the family of 16-year-old Adam Raine, who died by suicide in April after extensive conversations with ChatGPT. The family claims ChatGPT encouraged Raine to take his own life and helped him plan it.
“This tragedy was not a glitch or an unforeseen edge case,” the lawsuit said, but rather “the predictable result of deliberate design choices.”
In another high-profile case, Florida mother Megan Garcia sued Character Technologies, the company behind Character.ai, last year for wrongful death, alleging that its chatbot played a role in the suicide of her 14-year-old son, Sewell Setzer III. In testimony before the US Senate, Garcia said her son became “increasingly isolated from real life” and was drawn into sexually explicit conversations with the company’s chatbot.
“Instead of preparing for high school achievement, Sewell spent the final months of his life being exploited and sexually groomed by chatbots,” Garcia testified. The company was accused of designing AI systems to appear emotionally human “to gain his trust and keep him endlessly engaged.”
She was not the only parent to testify. Another mother, from Texas, speaking under the name “Ms. Jane Doe,” told lawmakers that her teenage son’s mental health collapsed after months of late-night conversations with similar chatbots. “He is now in residential treatment,” she said.
Both mothers urged Congress to restrict sexually explicit AI systems, warning that AI chatbots can quickly form manipulative emotional dependencies with minors — exactly the scenario Cuban says OpenAI risks. Unlike TikTok or Instagram, where content can be reported, individual AI conversations are private and difficult to monitor.
“Parents today are afraid of books in libraries,” Cuban wrote. “They haven’t seen anything yet.”
2025-10-15 19:39:00