Regular ChatGPT users (this article's author among them) may have noticed that OpenAI's hit chatbot offers a “temporary chat” mode, which is meant to wipe all information exchanged between the user and the AI model as soon as the user closes the chat session. In addition, OpenAI lets users manually delete previous ChatGPT sessions from the sidebar in the web and desktop/mobile apps by left-clicking or control-clicking them, or by holding/long-pressing on their selection.
However, this week OpenAI drew criticism from some of those ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated.
As AI influencer and software engineer Simon Willison wrote on his personal blog: “Paying customers of [OpenAI’s] APIs may decide to switch to other providers that can offer retention policies that aren’t subverted by this court order!”
“Are you telling me that my deleted ChatGPT conversations are not actually deleted and are being saved for a judge to review?” posted X user @ns123abc in a comment that attracted more than a million views.
Another user, @Kenoo, added: “You can ‘delete’ a ChatGPT chat, but all conversations must be retained due to legal obligations?”
Indeed, OpenAI has confirmed that it has been retaining deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it did not disclose this to users until yesterday, June 5.
The order, issued on May 13, 2025 by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis,” including chats deleted at a user’s request or because of privacy obligations.
The court directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a copyright case now in its third year of litigation, in which the NYT’s lawyers allege that OpenAI’s language models regurgitate copyrighted content. The plaintiffs argue that the logs, including those deleted by users, may contain infringing outputs relevant to the claims.
While OpenAI complied with the order immediately, it did not publicly notify affected users for more than three weeks, until yesterday, when it published a blog post and FAQ describing the legal mandate and outlining who is affected.
However, OpenAI lays the blame squarely on the NYT and the judge’s order, saying it believes the preservation demand is “unfounded.”
OpenAI explains what’s happening with the court order to retain ChatGPT user logs – including deleted chats
In a blog post published yesterday, OpenAI Chief Operating Officer Brad Lightcap defended the company’s position, stating that it is standing up for users’ privacy and security against an overreaching court order, writing:
“The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users.”
The post clarified that the preservation order covers users of ChatGPT Free, Plus, Pro, and Team, along with API customers without a Zero Data Retention (ZDR) agreement, meaning that even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu subscribers, as well as API customers using ZDR endpoints, are not affected by the order; their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and accessible only to a small number of legal and security personnel.
“This data is not automatically shared with The New York Times or anyone else,” Lightcap emphasized in the OpenAI post.
Altman himself floats a new concept of “AI privilege” allowing confidential conversations between models and users, as with a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also publicly addressed the issue in a post from his account on the social network X last night, writing:
“Recently the NYT asked a court to force us to not delete any user chats. We think this was an inappropriate request that sets a bad precedent. We are appealing the decision. We will fight any demand that compromises our users’ privacy; this is a core principle.”
He also suggested that a wider legal and ethical framework for AI privacy may be needed:
“We have recently been thinking about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation.”
“IMO, talking to an AI should be like talking to a lawyer or a doctor.”
“I hope society will figure this out soon.”
The concept of AI privilege – as a potential legal standard – parallels attorney–client privilege and doctor–patient confidentiality.
Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman’s remarks indicate that OpenAI may increasingly advocate for such a shift.
What happens next for OpenAI and those temporary/deleted chats?
OpenAI has filed a formal objection to the court’s order, asking that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that retaining billions of additional data points is neither necessary nor proportionate.
Judge Wang indicated at a May 27 hearing that the order is temporary. She instructed the parties to develop a sampling plan to determine whether deleted user data differs materially from the retained logs. OpenAI was ordered to submit that proposal by today, June 6, though I have not yet seen the filing.
What it means for enterprises and decision-makers responsible for using ChatGPT in corporate environments
While the order exempts ChatGPT Enterprise customers and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions within their organizations.
Those who oversee the full lifecycle of large language models – from data ingestion to fine-tuning and integration – may need to reassess their assumptions about data governance. If user-facing components of LLM workflows are now subject to legal preservation orders, that raises urgent questions about where data goes after it leaves a secure endpoint and how to isolate, log, or anonymize high-risk interactions.
Any platform that relies on OpenAI’s APIs must confirm which endpoints it is using (e.g., ZDR vs. non-ZDR) and ensure that its data handling rules are reflected in user agreements, audit logs, and internal documentation, as sketched below.
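As one illustration, here is a minimal Python sketch of recording the retention tier an organization believes applies to each API call. It assumes the official openai Python SDK; the RETENTION_TIER constant and chat_with_audit helper are hypothetical names, and ZDR is an account-level agreement with OpenAI rather than a per-request API flag, so the tier must be maintained by hand from your own contract records.

```python
# Hypothetical sketch: tag each OpenAI API call with the retention tier your
# organization believes applies, and write it to a local audit log. ZDR is an
# account-level agreement with OpenAI, not a per-request flag, so
# RETENTION_TIER must be kept in sync with your own contract records.
import json
import logging
from datetime import datetime, timezone

from openai import OpenAI  # pip install openai

RETENTION_TIER = "non-ZDR"  # or "ZDR" if your account has a ZDR agreement

audit_logger = logging.getLogger("llm.audit")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_with_audit(messages, model="gpt-4o"):
    """Send a chat completion and record retention metadata locally."""
    response = client.chat.completions.create(model=model, messages=messages)
    # Log metadata only, never message content, so the audit trail doesn't
    # become yet another copy of potentially sensitive conversations.
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "retention_tier": RETENTION_TIER,
        "response_id": response.id,
    }))
    return response
```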
Even where ZDR endpoints are in use, data lifecycle rules may need review to confirm that downstream systems (e.g., analytics, logging, backups) are not inadvertently retaining transient interactions that were assumed to be short-lived; the filter sketch after this paragraph shows one way to keep conversation content out of those sinks.
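Here is a minimal sketch, not specific to OpenAI, of stripping conversation content from log records before they reach downstream sinks, using Python’s standard-library logging filters; the payload and messages field names are assumptions about a hypothetical internal log schema.

```python
# Hypothetical sketch: a logging filter that strips chat message content from
# records before they reach downstream sinks (analytics, backups), so
# transient conversations aren't silently retained there. The "payload" and
# "messages" field names are assumptions about an internal log schema.
import logging

class RedactChatContent(logging.Filter):
    """Strip conversation text from log records, keeping only metadata."""
    def filter(self, record: logging.LogRecord) -> bool:
        payload = getattr(record, "payload", None)
        if isinstance(payload, dict) and "messages" in payload:
            payload["messages"] = "[redacted: transient chat content]"
        return True  # keep the record, minus the sensitive field

logger = logging.getLogger("llm.audit")
handler = logging.StreamHandler()  # stand-in for an analytics/backup sink
handler.setFormatter(logging.Formatter("%(message)s %(payload)s"))
handler.addFilter(RedactChatContent())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# The user's message text never reaches the sink; only metadata does.
logger.info("chat completed", extra={
    "payload": {"model": "gpt-4o", "messages": [{"role": "user", "content": "..."}]},
})
```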
Security officers responsible for risk management should now extend their threat models to treat legal discovery as a potential vector. Teams must verify whether OpenAI’s backend retention practices align with internal controls and third-party risk assessments, and whether users rely on features such as “temporary chat” that no longer behave as expected while the legal hold is in place.
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it is an inflection point in the evolving conversation about AI privacy and data rights. By framing the issue as one of “AI privilege,” OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act – between legal compliance, enterprise assurances, and user trust – while facing ever-louder questions about who controls your data when you talk to a machine.