Anthropic to start training AI models from users' chat conversations
Anthropic allows users to opt in for their chat data to train AI, retaining data for five years to enhance model safety and capabilities, impacting 18.9 million monthly users.
9 Articles
Anthropic will start training Claude on user data – but you don’t have to share yours
2025-08-29 15:31:00 www.zdnet.com Anthropic has become a leading AI lab, with one of its biggest draws being its strict position on prioritizing consumer data privacy. From the outset of Claude, its chatbot, Anthropic took a firm stance against using user data to train its models, deviating from a common industry practice. Source
Now Claude is training on your data: Anthropic changes the rules of the game and will collect your data to improve its AI, unless you explicitly opt out. The generative AI landscape is changing at high speed, and one of the most significant announcements of summer 2025 comes from Anthropic, the maker of the Claude model. Until now known for its cautious, security-oriented approach, the company has just revised its General Terms …
Anthropic will start training Claude on user data - but you don't have to share yours - WorldNL Magazine
ZDNET's key takeaways: Anthropic updated its AI training policy. Users can now opt in to having their chats used for training. This deviates from Anthropic's previous stance. …
Coverage Details
Bias Distribution
- 100% of the sources are Center