
Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

Summary by VentureBeat
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.


AI models adopt hidden behaviors from seemingly harmless training data, even without recognizable clues. Researchers warn that this could be a fundamental property of neural networks. The article "Anthropic warns: AI systems learn unintentionally problematic behavior patterns" first appeared on THE-DECODER.de.


A new Anthropic study reveals a surprising phenomenon: AI models become worse, not better, when given longer thinking processes. This so-called "inverse scaling" affects leading models such as Claude and ChatGPT, and it has real consequences.

A new Anthropic study shows that longer thinking does not make large language models smarter, but more prone to errors. For companies deploying AI, this finding could have far-reaching consequences. Read more on t3n.de.


Bias Distribution

  • 100% of the sources are Center


RTInsights broke the news on Tuesday, July 22, 2025.
