Anthropic will train AI models on user chats and code

October 3, 2025


Anthropic has announced that, starting September 28, 2025, it will begin training its AI models on user chat transcripts and coding sessions unless users actively opt out. The company has also extended its data retention period, allowing it to keep user data for up to five years under the new terms. The policy applies to all new and resumed conversations on Claude, Anthropic’s AI platform, but excludes past chats unless they are continued.

The change affects all consumer subscription tiers, including Claude Free, Pro, and Max, as well as Claude Code accounts tied to those plans. Commercial tiers such as Claude Gov, Claude for Work, and Claude for Education are not affected, nor is API use through partners like Amazon Bedrock and Google Cloud.

New users will be asked to make their choice during sign-up, while existing users must decide by the deadline through a pop-up prompt. The prompt displays “Updates to Consumer Terms and Policies” in bold text alongside a large “Accept” button. Beneath it, a toggle is pre-set to allow chats and coding sessions to be used for AI training, which many users may leave unchanged when clicking “Accept.” Those who wish to opt out can switch the toggle off immediately. Users who have already accepted can later change their preference in the Privacy tab of their account settings, though any change applies only to future data, not to what has already been collected.

Anthropic stressed that it uses filtering tools and automated processes to mask or remove sensitive information, and clarified that user data is not sold to third parties. The company framed the policy as a way to improve Claude’s accuracy and usefulness while preserving consumer control.

Still, the reliance on default settings and broad acceptance mechanisms raises concerns that many users may inadvertently share their data without realizing the implications. Users are encouraged to review the new terms carefully and make an informed choice about whether to contribute their conversations to Anthropic’s ongoing AI training efforts.