Anthropic's Powerful Opus 4.1 Model Is Here - How to Access It (and Why You'll Want To)
AUG 5 – Claude Opus 4.1 scores 74.5% on the SWE-bench Verified software engineering benchmark and improves agentic task, reasoning, and coding performance, maintaining Anthropic's lead amid growing AI competition.
- Anthropic released Claude Opus 4.1 on Tuesday, upgrading its flagship AI model's agentic task, coding, and reasoning capabilities across its platforms.
- The release follows the May debut of Claude Opus 4 and aims to keep Anthropic ahead of OpenAI's GPT-5 announcement, expected within days.
- Claude Opus 4.1 improved coding accuracy to 74.5% on SWE-bench Verified, outperforming OpenAI's 69.1% and Google's 67.2%, with particular strength in complex code refactoring.
- The model offers hybrid reasoning that balances quick responses with deep problem-solving, supports extended outputs of up to 32,000 tokens, and powers tools such as Claude Code and the Anthropic API (a minimal API sketch follows this list).
- Anthropic's rapid growth carries risk, with nearly half of its API revenue concentrated in just two customers and competition intensifying, but further model improvements are planned in the coming weeks.
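For developers who want to try the upgraded model through the API mentioned above, a request might look like the sketch below. This is a minimal sketch, assuming the official `anthropic` Python SDK, an `ANTHROPIC_API_KEY` environment variable, and the `claude-opus-4-1` model alias; confirm the exact model identifier and token limits in Anthropic's documentation.

```python
# Minimal sketch: calling Claude Opus 4.1 through the Anthropic Messages API.
# Assumes the official `anthropic` SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY environment variable; the model alias below is an
# assumption, so check Anthropic's model list for the current identifier.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-1",   # assumed alias for the Opus 4.1 release
    max_tokens=2048,           # well under the model's output ceiling
    messages=[
        {
            "role": "user",
            "content": "Refactor this function to remove the nested loops: ...",
        }
    ],
)

# The response content is a list of blocks; text blocks carry the model's reply.
print(message.content[0].text)
```

The same model is also reachable through Claude Code and the major cloud model catalogs, so this direct API call is only one of several access paths the coverage below describes.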
21 Articles
Claude Opus 4.1 Thinking Model Now Available for Perplexity Max Subscribers | LatestLY
The Claude Opus 4.1 Thinking model is now available to Perplexity Max subscribers, giving them access to one of the most advanced reasoning models alongside others such as Grok 4, o3, o3-pro, and Claude 4.0 Sonnet Thinking. The Max plan also offers exclusive access to Claude Opus 4.1 and o3-pro for the best overall AI experience.


Anthropic’s new Claude 4.1 dominates coding tests days before GPT-5 arrives
Anthropic's Claude Opus 4.1 achieves 74.5% on the SWE-bench Verified coding benchmark, leading the AI market, but faces risk as nearly half its $3.1B API revenue depends on just two customers.
Coverage Details
Bias Distribution
- 50% of the sources are Center