As concerns about the impact of AI on young people continue to grow, OpenAI has introduced an "age prediction" feature in ChatGPT to help identify minors and place reasonable limits on the content of their conversations.
In recent years, OpenAI has been heavily criticized for the impact ChatGPT can have on children. A number of teen suicides have been linked to the chatbot, and like other AI providers, OpenAI has been criticized for allowing ChatGPT to discuss sexual topics with underage users. Last April, the company was forced to fix a bug that allowed its chatbot to generate sexual content for users under 18.
The company has been working on the problem of underage users for some time, and the new "age prediction" feature complements its existing safety measures. The feature uses an AI algorithm that evaluates user accounts for specific "behavioral and account-level signals" in an effort to identify underage users, OpenAI says in a blog post published Tuesday.
The company says these "signals" include the user's stated age, how long the account has existed, and the times of day the account is typically active. OpenAI already has content filters designed to weed out discussions of sex, violence, and other potentially problematic topics for users under 18. If the age prediction engine identifies an account as belonging to someone under 18, these filters will be applied automatically.
If a user is mistakenly identified as a minor, there is a way to restore "adult" account status: OpenAI says affected users can upload a selfie to Persona, the company's identity verification partner.
