ChatGPT Will Now Try to Guess Your Age

OpenAI introduces a new age-prediction system aimed at improving online safety for younger users.

OpenAI has announced that ChatGPT will soon be able to estimate the age of its users, even when they do not explicitly state it. The move is designed to strengthen protections for minors and reduce their exposure to potentially harmful content on one of the world’s most widely used artificial intelligence platforms.

The company revealed the development in a recent blog post, outlining how the new system will be built into ChatGPT user accounts. The feature marks a significant step in OpenAI’s broader effort to address concerns around child safety and responsible AI use.

Why OpenAI Is Introducing Age Prediction

According to OpenAI, the new technology specifically targets users under the age of 18 who do not disclose their real age. In such cases, ChatGPT will attempt to estimate their age automatically.

The goal is not to invade privacy, the company says, but rather to ensure that minors receive stronger safeguards when using AI-powered chatbots. These safeguards include restricting access to sensitive or potentially harmful material.

This comes at a time when digital platforms face growing pressure from regulators, parents, and advocacy groups to better protect children online. AI systems, in particular, have drawn scrutiny because of their ability to generate realistic content on almost any topic.

How the System Works

OpenAI explains that the software will analyze a combination of behavioral patterns and technical signals to estimate a user’s age.

These include:

  • How long the account has existed

  • When and how often it is active

  • Patterns in user interaction and behavior

  • Other internal indicators linked to usage

By combining these factors, the system will generate an approximate age profile rather than a precise figure.
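OpenAI has not published how these signals are weighted or modeled. As a purely illustrative sketch of the general idea of combining behavioral signals into a coarse age estimate, one might imagine a simple scoring heuristic like the following (every name, weight, and threshold here is invented and has no connection to OpenAI's actual system):

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical behavioral signals; OpenAI's real inputs are undisclosed."""
    account_age_days: int
    active_hours: list[int]      # hours of day (0-23) the account is typically active
    avg_session_minutes: float

def estimate_age_band(s: AccountSignals) -> str:
    """Toy heuristic mapping signals to a coarse age band.

    This is NOT OpenAI's algorithm: the weights and cutoffs are invented
    solely to illustrate "combining factors into an approximate profile
    rather than a precise figure".
    """
    score = 0.0
    # An older account weakly suggests an adult user.
    score += min(s.account_age_days / 365, 3.0)
    # Activity concentrated in typical after-school hours nudges the score down.
    if any(h in range(15, 22) for h in s.active_hours):
        score -= 0.5
    # Longer average sessions nudge the score slightly upward.
    score += min(s.avg_session_minutes / 60, 1.0)
    return "likely-minor" if score < 1.0 else "likely-adult"

# A month-old account, active after school, with short sessions:
print(estimate_age_band(AccountSignals(30, [16, 17, 18], 20.0)))  # likely-minor
```

Note that a real system would output a probability rather than a hard label, precisely so that borderline cases can be routed to the verification step described below.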

However, OpenAI has not disclosed the exact algorithms or models used. The company says this is to prevent users from gaming the system or manipulating the results.

What Happens If the System Gets It Wrong?

Recognizing that no automated system is perfect, OpenAI has included a way for users to challenge an incorrect age classification.

If the system mistakenly identifies someone as underage, the user will be able to verify their age through an identity verification service. This process will require:

  • A live selfie

  • An official identity card

Only after verification will access be restored to content appropriate for adults.

While this adds an extra step for some users, OpenAI argues that it is necessary to balance user convenience with child safety.

A Long-Announced Feature

The development of this age-prediction system is not sudden. OpenAI first announced work on it in September 2025, as part of a broader strategy to improve safety features for younger users.

At that time, the company said the new tools were aimed at ensuring that minors would be less likely to encounter sensitive, explicit, or potentially harmful material generated by AI.

Since then, OpenAI has been gradually introducing additional parental controls and safety layers into ChatGPT.

Accuracy Still Unclear

Despite the announcement, OpenAI has admitted that it is not yet clear how accurate the system will be.

This uncertainty is significant, especially considering the scale of the platform. ChatGPT reportedly has around 800 million users worldwide, making it one of the largest AI-driven platforms in existence.

At that scale, even a small error rate could affect millions of users. Accurately predicting age based only on behavior, without explicit input, remains a major technical challenge.

Growing Pressure on AI Companies

OpenAI and other AI firms have been under intense criticism in recent years over the potential risks their products pose to children.

Some critics have accused AI platforms of contributing indirectly to cases of self-harm and even deaths among minors, particularly when vulnerable users engage with chatbots in emotionally sensitive situations.

While such claims are complex and often disputed, they have pushed technology companies to act more decisively.

In response, OpenAI announced in 2025 that it would introduce enhanced parental controls within ChatGPT. These tools allow parents to limit usage times, restrict certain types of content, and monitor interactions more closely.

A Wider Trend in Tech

OpenAI’s move reflects a broader trend across the tech industry. Social media platforms like TikTok, Instagram, and YouTube have also introduced age-detection tools and stricter policies for young users.

Governments in Europe, the United States, and parts of Asia are now considering or enforcing regulations that require companies to verify user ages more reliably, especially for services accessible to children.

What This Means for Users

For adult users, the new system may go largely unnoticed unless it makes an error. For minors, however, it could mean a more restricted and safer experience on ChatGPT.

More importantly, it signals a shift in how AI companies approach responsibility: moving from relying solely on self-reported data to actively assessing user safety risks.

As AI tools become more deeply embedded in daily life, features like this may soon become standard across the industry.

Whether OpenAI’s age-prediction system will prove reliable remains to be seen. However, its introduction highlights a growing recognition that technological innovation must go hand in hand with user protection, especially when young people are involved.

This material may not be published, broadcast, rewritten, redistributed or derived from.
Unless otherwise stated, all content is copyrighted © 2025 News Alert.