Model Comparison

Claude Opus 4.6 vs Claude Opus 4.7

Claude Opus 4.7 significantly outperforms Claude Opus 4.6 across most benchmarks. The two models cost the same.

Performance Benchmarks

Comparative analysis across standard metrics

10 benchmarks

Claude Opus 4.6 outperforms on 2 benchmarks (BrowseComp, CyberGym), while Claude Opus 4.7 is better on the other 8 (CharXiv-R, Finance Agent, GPQA, Humanity's Last Exam, MCP Atlas, MMMLU, SWE-Bench Verified, Terminal-Bench 2.0).



Pricing Analysis

Price comparison per million tokens

Claude Opus 4.6 and Claude Opus 4.7 cost the same

For input processing, both models cost $5.00 per 1M tokens; for output processing, both cost $25.00 per 1M tokens.

In conclusion, Claude Opus 4.6 and Claude Opus 4.7 cost the same.*

* Using a 3:1 ratio of input to output tokens
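
To make the footnote concrete, here is a minimal sketch of the blended-price arithmetic in Python, using the rates listed below and the 3:1 weighting from the footnote:

    # Blended price per 1M tokens at a 3:1 input-to-output ratio.
    # Both models share the same rates, so the result applies to either one.
    input_price = 5.00    # USD per 1M input tokens
    output_price = 25.00  # USD per 1M output tokens

    # Weighted average: 3 parts input to 1 part output.
    blended = (3 * input_price + 1 * output_price) / 4
    print(f"Blended price: ${blended:.2f} per 1M tokens")  # -> $10.00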

Lowest available price from all providers
Claude Opus 4.6 (Anthropic)
Input tokens: $5.00
Output tokens: $25.00
Best provider: Anthropic

Claude Opus 4.7 (Anthropic)
Input tokens: $5.00
Output tokens: $25.00
Best provider: Anthropic

Context Window

Maximum input and output token capacity

Both models have the same input context window of 1,000,000 tokens. Both models can generate responses up to 128,000 tokens.

Claude Opus 4.6 (Anthropic)
Input: 1,000,000 tokens
Output: 128,000 tokens

Claude Opus 4.7 (Anthropic)
Input: 1,000,000 tokens
Output: 128,000 tokens
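
As a sketch of how the output ceiling applies in practice, the request below caps generation via the max_tokens parameter of the Anthropic Python SDK. The model ID string is a placeholder, not a confirmed identifier; check Anthropic's model list for the exact name.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-opus-4-7",   # placeholder model ID, not confirmed
        max_tokens=128_000,        # request up to the documented output ceiling
        messages=[{"role": "user", "content": "Summarize this report: ..."}],
    )
    print(response.content[0].text)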

Input Capabilities

Supported data types and modalities

Both Claude Opus 4.6 and Claude Opus 4.7 support multimodal inputs; the modalities listed below apply to both models.

Claude Opus 4.6

Text
Images
Audio
Video

Claude Opus 4.7

Text
Images
Audio
Video
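
As one illustration of multimodal input, the sketch below sends an image alongside a text prompt using the Messages API's base64 image source. The model ID and the file name are placeholders; the modality lists above come from the comparison page itself.

    import base64
    import anthropic

    client = anthropic.Anthropic()

    # Encode a local image (hypothetical file) for the base64 image source.
    with open("chart.png", "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")

    response = client.messages.create(
        model="claude-opus-4-7",  # placeholder model ID, not confirmed
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_data}},
                {"type": "text", "text": "What does this chart show?"},
            ],
        }],
    )
    print(response.content[0].text)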

License

Usage and distribution terms

Both models are proprietary and closed source, with usage restrictions defined by Anthropic.

Claude Opus 4.6

Proprietary

Closed source

Claude Opus 4.7

Proprietary

Closed source

Release Timeline

When each model was launched

Claude Opus 4.6 was released on February 5, 2026, while Claude Opus 4.7 was released on April 16, 2026.

Claude Opus 4.7 is about 2 months (70 days) newer than Claude Opus 4.6.

Claude Opus 4.6: Feb 5, 2026
Claude Opus 4.7: Apr 16, 2026 (2 months newer)
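
The gap between the two release dates is easy to verify; a minimal sketch:

    from datetime import date

    # Release dates from the timeline above.
    opus_4_6 = date(2026, 2, 5)
    opus_4_7 = date(2026, 4, 16)

    delta = opus_4_7 - opus_4_6
    print(f"{delta.days} days apart")  # -> 70 days, roughly 2.3 months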

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Both Claude Opus 4.6 and Claude Opus 4.7 are available from Anthropic only.

Claude Opus 4.6 (Anthropic)
Input: $5.00/1M tokens
Output: $25.00/1M tokens

Claude Opus 4.7 (Anthropic)
Input: $5.00/1M tokens
Output: $25.00/1M tokens

* Prices shown are per million tokens
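
Since both models share the same rates, estimating the cost of a single request is straightforward. A minimal sketch, using the per-1M-token prices listed above; the token counts in the example are arbitrary (in practice they would come from the usage object the API returns, e.g. response.usage.input_tokens and response.usage.output_tokens in the Anthropic SDK):

    # Per-token rates derived from the listed per-1M-token prices.
    INPUT_RATE = 5.00 / 1_000_000    # USD per input token
    OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Cost in USD of one request, given its token usage."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # Example: 12,000 input tokens and 1,500 output tokens.
    print(f"${request_cost(12_000, 1_500):.4f}")  # -> $0.0975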


Key Takeaways

Claude Opus 4.6: higher BrowseComp score (84.0% vs 79.3%)
Claude Opus 4.6: higher CyberGym score (73.8% vs 73.1%)
Claude Opus 4.7: higher CharXiv-R score (91.0% vs 77.4%)
Claude Opus 4.7: higher Finance Agent score (64.4% vs 60.7%)
Claude Opus 4.7: higher GPQA score (94.2% vs 91.3%)
Claude Opus 4.7: higher Humanity's Last Exam score (54.7% vs 53.1%)
Claude Opus 4.7: higher MCP Atlas score (77.3% vs 62.7%)
Claude Opus 4.7: higher MMMLU score (91.5% vs 91.1%)
Claude Opus 4.7: higher SWE-Bench Verified score (87.6% vs 80.8%)
Claude Opus 4.7: higher Terminal-Bench 2.0 score (69.4% vs 65.4%)
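
The 2-versus-8 split reported above can be reproduced directly from these scores; a minimal sketch, with the score pairs transcribed from the takeaways:

    # benchmark -> (Claude Opus 4.6 score, Claude Opus 4.7 score), in %.
    scores = {
        "BrowseComp": (84.0, 79.3),
        "CyberGym": (73.8, 73.1),
        "CharXiv-R": (77.4, 91.0),
        "Finance Agent": (60.7, 64.4),
        "GPQA": (91.3, 94.2),
        "Humanity's Last Exam": (53.1, 54.7),
        "MCP Atlas": (62.7, 77.3),
        "MMMLU": (91.1, 91.5),
        "SWE-Bench Verified": (80.8, 87.6),
        "Terminal-Bench 2.0": (65.4, 69.4),
    }

    wins_4_6 = [name for name, (s46, s47) in scores.items() if s46 > s47]
    wins_4_7 = [name for name, (s46, s47) in scores.items() if s47 > s46]
    print(f"Claude Opus 4.6 leads on {len(wins_4_6)}: {wins_4_6}")  # 2 benchmarks
    print(f"Claude Opus 4.7 leads on {len(wins_4_7)}: {wins_4_7}")  # 8 benchmarks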

Detailed Comparison

AI Model Comparison Table

Feature                   Claude Opus 4.6      Claude Opus 4.7
Provider                  Anthropic            Anthropic
Input price (per 1M)      $5.00                $5.00
Output price (per 1M)     $25.00               $25.00
Context window            1,000,000 tokens     1,000,000 tokens
Max output                128,000 tokens       128,000 tokens
Release date              Feb 5, 2026          Apr 16, 2026
License                   Proprietary          Proprietary

FAQ

Common questions about Claude Opus 4.6 vs Claude Opus 4.7

Which model is better, Claude Opus 4.6 or Claude Opus 4.7?
Claude Opus 4.7 significantly outperforms Claude Opus 4.6 across most benchmarks. Both models are made by Anthropic. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Claude Opus 4.6's top scores include Vending-Bench 2: 100.0%, AIME 2025: 99.8%, Tau2 Telecom: 99.3%, Graphwalks parents >128k: 95.4%, and MRCR v2 (8-needle): 93.0%. Claude Opus 4.7's top scores include GPQA: 94.2%, MMMLU: 91.5%, CharXiv-R: 91.0%, SWE-Bench Verified: 87.6%, and BrowseComp: 79.3%.

Which model costs more?
Neither: both models cost $5.00 per million input tokens and $25.00 per million output tokens.

Which model has the larger context window?
Both support the same 1.0M-token context window. A larger context window lets you process longer documents, conversations, or codebases in a single request.
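
As a rough way to gauge whether a document fits in that window, the sketch below uses the common approximation of about 4 characters per token for English text. The ratio is a rule of thumb, not a tokenizer count, and the file name is hypothetical:

    # Shared context window of both models, in tokens.
    CONTEXT_WINDOW = 1_000_000

    def fits_in_context(text: str, chars_per_token: float = 4.0) -> bool:
        """Rough estimate of whether `text` fits in the context window."""
        estimated_tokens = len(text) / chars_per_token
        return estimated_tokens <= CONTEXT_WINDOW

    with open("long_report.txt") as f:  # hypothetical document
        print(fits_in_context(f.read()))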