Wednesday, March 11, 2026

No, ChatGPT has not added a ban on providing legal and health advice


OpenAI says ChatGPT’s behavior “remains unchanged” following reports on social media falsely claiming that recent updates to its terms of use prevent the chatbot from providing legal and medical advice. Karan Singhal, Head of AI Health at OpenAI, writes on X that these claims are “untrue.”

“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” says Singhal, responding to a now-deleted post from betting platform Kalshi that read, “JUST IN: ChatGPT will no longer provide health or legal advice.”

According to Singhal, the inclusion of legal and medical advice policies “is not a new change to our terms.”

The policy update from October 29 provides a list of things ChatGPT cannot be used for, one of which is “providing tailored advice requiring a license, such as legal or medical advice, without the appropriate involvement of a licensed professional.”

This is similar to OpenAI’s previous ChatGPT usage policies, which said users should not engage in activities that “may significantly affect the safety, well-being, or rights of others,” including “providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosing the use of AI assistance and its potential limitations.”

OpenAI previously had three separate policies, including a “universal” policy and ones covering the use of ChatGPT and its APIs. With the recent update, the company has a single unified list of policies that, according to the changelog, “reflect the universal set of policies for OpenAI products and services,” but the policies themselves remain the same.
