Friday, March 13, 2026

Anthropic will start training AI models on chat transcripts


Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users opt out. It is also extending its data retention policy to five years, again for users who do not opt out.

All users will have to make a decision by September 28. For users who click “Accept” now, Anthropic will immediately begin training its models on their data and retaining that data for up to five years, according to a blog post Anthropic published on Thursday.

The setting applies to “new or resumed chats and coding sessions.” Even if you agree to let Anthropic train its AI models on your data, it will not do so with earlier chats or coding sessions that you have not resumed. But if you continue an old chat or coding session, all bets are off.

The updates apply to all of Claude’s consumer subscription tiers, including Claude Free, Pro, and Max, “including when they use Claude Code from accounts associated with those plans,” Anthropic wrote. But they do not apply to Anthropic’s commercial usage tiers, such as Claude Gov, Claude for Work, Claude for Education, or API use, “including via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.”

New users will choose their preference during the Claude signup process. Existing users must decide via a pop-up window, which they can postpone by clicking a “Not now” button, though they will be forced to make a decision by September 28.

It is worth noting, however, that many users may quickly and accidentally hit “Accept” without reading what they are agreeing to.

The pop-up that users will see reads, in large letters, “Updates to Consumer Terms and Policies,” and the lines below say: “An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today.” At the bottom there is a large black “Accept” button.

In smaller print below, a few lines say: “Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” with an on/off toggle next to it. It is set to “On” by default. Presumably, many users will immediately click the large “Accept” button without changing the toggle, even if they have not read it.

If you want to opt out, you can flip the toggle to “Off” when you see the pop-up. If you have already accepted without realizing it and want to change your decision, go to Settings, then the Privacy tab, then the Privacy Settings section, and finally set the “Help improve Claude” option to “Off.” Consumers can change their decision at any time via their privacy settings, but the new choice applies only to future data; data the system has already trained on cannot be pulled back.

“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” Anthropic wrote in the blog post. “We do not sell users’ data to third parties.”
