
Anthropic Ditches Its Core Safety Promise in the Middle of an AI Red Line Fight with the Pentagon

Anthropic abandoned its 2023 AI safety delay policy amid commercial pressures and a Pentagon dispute over Claude AI guardrails, reflecting a shift toward competitiveness and growth.

  • On Tuesday, Anthropic PBC updated its rules to stop delaying development if it lacks a significant lead over competitors, amid a dispute over Claude AI guardrails with the U.S. Defense Department.
  • Facing commercial pressure, Anthropic has removed "safely" from its 2024 mission statement and is racing rivals including OpenAI, which is valued at over $850 billion and has IPO plans in the works.
  • During a Tuesday meeting, U.S. Defense Department officials threatened to invoke the Defense Production Act, a Cold War‑era law, to compel Anthropic's cooperation if it failed to comply by Friday.
  • A senior safety researcher announced their departure earlier this month, while an Anthropic spokeswoman said fast AI development demands rapid policy updates.
  • Industry rivalry is widening as Anthropic faces increasing competition from rivals including OpenAI, while public clashes between Dario Amodei and Sam Altman surfaced last week at the New Delhi summit.
Insights by Ground AI

32 Articles

WLWT · Center · Reposted by 11 other sources

Anthropic loosens its safety promise in the middle of an AI red line fight with the Pentagon

The AI company, founded by people worried about the dangers of the technology, said its previous safety policy was designed to build industry standards around managing AI risks — guardrails that the industry blew through.

·Cincinnati, United States
KIFI · Center · Reposted by 9 other sources

Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

By Clare Duffy, Lisa Eadicicco, CNN (CNN) — Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition. Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change. In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-ol…

·Idaho Falls, United States

Anthropic has decided to relax its core safety policy. The AI startup, known for its commitment to safety, says the move is necessary to keep pace in a rapidly changing field, and made it while locked in a standoff with the US Department of Defense over limits on the use of its artificial intelligence tool, Claude. In 2023, the company stated in its Responsible Scaling Policy that it would d…


Bias Distribution

  • 93% of the sources are Center


WebProNews broke the news on Wednesday, February 25, 2026.
