
SEATTLE, Aug. 29, 2025 — Anthropic will begin training its Claude AI models on chat transcripts from consumer accounts unless users opt out by Sept. 28, reversing its previous policy of not using consumer conversations for model training, the company said Friday.
Anthropic said data from users on its Free, Pro and Max plans — including sessions in Claude Code — may be stored for as long as five years, far longer than the prior 30-day retention limit. A pop-up labeled “Updates to Consumer Terms and Policies” now prompts existing users to accept the change; the large “Accept” button is prominent, while a smaller toggle permitting training is switched on by default and must be turned off to keep conversations out of training.
The new terms do not apply to corporate customers using Claude for Work or Claude for Education, or to those accessing Claude through Amazon Bedrock or Google Cloud’s Vertex AI. Those enterprise contracts still bar Anthropic from using customer data for model improvement.
Anthropic framed the decision as necessary to create “more capable, useful AI models” and strengthen safety systems. The company said it will filter sensitive information and will not sell user data to third parties.
The policy shift lands amid fresh cybersecurity concerns. Anthropic this week published a threat-intelligence report describing cybercriminals who used Claude Code in data-extortion attacks that demanded ransoms topping $500,000 from 17 victims, and a separate scheme in which North Korean operatives allegedly used Claude to pass technical interviews and hold jobs at Fortune 500 firms.
Privacy advocates criticized the opt-out design as “friction that nudges users toward consent,” arguing the move erodes Anthropic’s reputation as a privacy-first alternative to OpenAI. Industry analysts called the change a pivot to “conventional Big Tech data practices,” noting added language that lets the firm analyze flagged content for research on the “societal impact of AI models” and collect location information.
Users who later want to revoke consent can go to Settings → Privacy and switch off “Help improve Claude.”