Anthropic Ditches Its Core Safety Promise in the Middle of an AI Red Line Fight with the Pentagon
Anthropic abandoned its 2023 AI safety delay policy amid commercial pressures and a Pentagon dispute over Claude AI guardrails, reflecting a shift toward competitiveness and growth.
- On Tuesday, Anthropic PBC updated its rules, dropping its pledge to delay development when it lacks a significant lead over competitors, amid a dispute with the U.S. Defense Department over Claude AI guardrails.
- Facing commercial pressure, Anthropic has removed the word "safely" from its 2024 mission statement and is racing rivals including OpenAI, which is valued at over $850 billion and has IPO plans in the near term.
- During a Tuesday meeting, U.S. Defense Department officials threatened to invoke the Defense Production Act, a Cold War-era law, to compel the use of Anthropic's AI if the company failed to comply by Friday.
- A senior safety researcher announced their departure earlier this month, while an Anthropic spokeswoman said fast-moving AI development demands rapid policy updates.
- Industry rivalry is intensifying as Anthropic faces growing competition from rivals including OpenAI; public clashes between Dario Amodei and Sam Altman surfaced last week at the New Delhi summit.
32 Articles
Anthropic loosens its safety promise in the middle of an AI red line fight with the Pentagon
The AI company, founded by people worried about the dangers of the technology, said its previous safety policy was designed to build industry standards around managing AI risks — guardrails that the industry blew through.
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
By Clare Duffy, Lisa Eadicicco, CNN (CNN) — Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition. Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change. In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-ol…
Anthropic has decided to relax its central safety policy. The AI startup, known for its commitment to safety, says the measure is necessary to keep pace in a rapidly changing field, and it has done so while locked in a standoff with the US Department of Defense over limits on the use of its artificial intelligence tool Claude. In 2023, the company stated in its Responsible Scaling Policy that it would d…
Coverage Details
Bias Distribution
- 93% of the sources are Center