On August 28, 2025, Anthropic announced updates to its Consumer Terms and Privacy Policy for Claude. The primary change is that user data – specifically, chats and coding sessions – will now be used to train and improve Claude’s AI models unless users opt out. The changes also extend data retention periods for chats and coding sessions.
These changes apply to users of Claude’s Free, Pro, and Max plans unless they opt out. New users must select their preference during signup, while existing users receive in-app notifications and must make a selection by September 28, 2025, to continue using Claude. Preferences may be adjusted at any time through Privacy Settings.
Previously, Anthropic’s documentation stated that user data was not used for model training and was generally deleted within 30 days; under the new policy, data may be retained for up to five years and used for training unless users opt out. These updates do not apply to services governed by Commercial Terms – such as Claude for Work, Claude for Government, Claude for Education, or API access via third-party platforms – and represent a notable shift in Anthropic’s approach to user data, once characterized by a stronger privacy focus.
The new terms and retention policy take effect immediately upon acceptance and apply only to new or resumed interactions. Notably, the consent flow for the newly introduced data usage practice is structured to encourage acceptance rather than to present the choice neutrally: the data-sharing toggle is pre-checked, and the prominent ‘Accept’ button confirms that default, making it considerably easier for users to agree than to decline. Users often assume that paid plans (such as Claude Pro and Max) exclude input data from model training, consistent with the common practice of AI providers such as OpenAI. If the update notice is overlooked and the default opt-in is left unchanged, users may inadvertently permit sensitive information to be used for training, creating risks of breaching contractual obligations concerning confidentiality, intellectual property, and privacy, as well as potential violations of privacy regulations.
Companies should therefore exercise caution. They are advised to assess whether these changes affect them and, where applicable, to manage their preferences accordingly, including auditing the selections made by employees and service providers and providing appropriate guidance where relevant.
Our team at Arnon continues to monitor AI industry developments and to provide timely insights to our clients and partners.
The above content is a summary provided for informational purposes only and does not constitute legal advice. It should not be relied upon without obtaining further professional legal counsel.