
OpenAI has rolled out a new age prediction system across ChatGPT’s consumer plans that automatically detects users under 18 and applies protective content restrictions to shield them from harmful material. The AI company announced the global deployment last week, with the European Union set to receive the feature in the coming weeks to accommodate regional requirements.
As OpenAI explains, the feature will use machine learning to analyze users’ behavioral patterns and, where warranted, restrict topics such as violence, sexual content, and self-harm.
The age restriction deployment comes amid intense scrutiny over AI chatbot safety for young users, particularly following wrongful death lawsuits alleging ChatGPT encouraged self-harm among teenagers. The company also faces ongoing legal battles and regulatory pressure from the Federal Trade Commission (FTC), which is investigating how AI chatbots from Big Tech potentially harm children and teenagers.
How the Age Prediction System Works
OpenAI’s age prediction model will analyze behavioral and account-level signals, including usage patterns over time, account longevity, typical activity times throughout the day, and the user’s stated age. When the system determines an account likely belongs to someone under 18, ChatGPT will automatically activate enhanced safety settings without requiring manual intervention.
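OpenAI has not published the model’s internals, but the signal-combining approach described above can be illustrated with a minimal sketch. The signal names, weights, and decision threshold below are entirely hypothetical assumptions for illustration, not OpenAI’s actual model.

```python
# Hypothetical sketch of behavioral age prediction.
# Signal names, weights, and the 0.5 threshold are illustrative
# assumptions, not OpenAI's actual system.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    stated_age: int            # age entered at signup (self-reported)
    account_age_days: int      # account longevity
    school_hours_ratio: float  # share of activity on weekdays, 08:00-15:00


def under_18_score(s: AccountSignals) -> float:
    """Combine account-level signals into a rough under-18 likelihood in [0, 1]."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6                     # self-report is a strong signal
    if s.account_age_days < 90:
        score += 0.1                     # newer accounts carry less history
    score += 0.3 * s.school_hours_ratio  # typical activity times during the day
    return min(score, 1.0)


def apply_safety_settings(s: AccountSignals) -> bool:
    """Activate enhanced safety settings automatically when the score crosses a threshold."""
    return under_18_score(s) >= 0.5


# An account with a stated age of 16 crosses the threshold, so teen
# restrictions would activate without any manual intervention.
teen = AccountSignals(stated_age=16, account_age_days=30, school_hours_ratio=0.1)
print(apply_safety_settings(teen))  # True
```

In practice a production system would use a trained classifier over many more signals rather than hand-set weights, but the flow is the same: score the account, then apply restrictions automatically once the score suggests a minor.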
The technology represents a more sophisticated approach than simple age declarations at signup. By examining how accounts are used rather than relying solely on self-reported information, OpenAI aims to catch cases where young users might have misrepresented their age during registration.
According to the company, the model will continue learning from deployment data, with OpenAI refining accuracy based on which signals prove most reliable in predicting age brackets. “Deploying age prediction helps us learn which signals improve accuracy, and we use those learnings to continuously refine the model over time,” the company said in a blog post.
For users incorrectly flagged as minors, OpenAI has implemented a verification pathway through Persona, a third-party identity verification service also used by platforms like Roblox. Adults whom ChatGPT wrongly flags can verify their age with Persona through either a live selfie that uses facial scanning technology to estimate age, or by uploading government-issued identification such as a driver’s license or passport.
OpenAI adds that the verification process includes safeguards for user privacy: Persona will delete submitted photos and identification documents within seven days of completing verification, and OpenAI never receives the actual images, only confirmation of age status.
These restrictions work in tandem with the parental controls the tech giant introduced in recent months, which allow parents to link their accounts with their teen’s ChatGPT profile, customize content guidelines, disable certain features like chat history, and receive notifications when the system detects their child may be experiencing acute distress.
Broader Context and What Comes Next for OpenAI
OpenAI’s development of these protections and the age prediction system accelerated following several high-profile incidents. For example, in August 2025, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging ChatGPT encouraged their son to take his own life.
OpenAI has further committed to continuously improving the age prediction model’s accuracy as it learns from real-world deployment. The company is also expanding its resources for parents, including expert-vetted guides on talking with teenagers about responsible AI use, developed in partnership with organizations like ConnectSafely and members of OpenAI’s Expert Council on Well-Being and AI.
Additionally, the timing of the age prediction rollout isn’t coincidental. OpenAI has been preparing to introduce an “adult mode” feature that would allow verified adult users to generate and engage with adult content, specifically erotica.
The success or failure of this system will have implications far beyond ChatGPT, potentially setting standards for how AI platforms verify and protect their youngest users while preparing features intended for adult audiences.