Anthropic reveals that as few as 250 malicious documents are all it takes to poison an LLM's training data, regardless of model size
Summary by PC Gamer
1 Article
Coverage Details
Total News Sources: 1
Leaning Left: 0 · Leaning Right: 0 · Center: 0
Bias Distribution
There is no tracked bias information for the sources covering this story.
