Model Comparison

Claude Opus 4 vs DeepSeek-R1-0528

Both models are evenly matched across the benchmarks, but DeepSeek-R1-0528 is roughly 32.9x cheaper per token (blended at a 3:1 input-to-output ratio).

Performance Benchmarks

Comparative analysis across standard metrics

Across 4 benchmarks, Claude Opus 4 leads on 2 (SWE-Bench Verified, Terminal-Bench) and DeepSeek-R1-0528 leads on the other 2 (AIME 2025, GPQA).


Arena Performance

Human preference votes

No arena vote data is available for this pairing.

Pricing Analysis

Price comparison per million tokens

DeepSeek-R1-0528 costs less

For input processing, Claude Opus 4 ($15.00/1M tokens) is 30.0x more expensive than DeepSeek-R1-0528 ($0.50/1M tokens).

For output processing, Claude Opus 4 ($75.00/1M tokens) is 34.9x more expensive than DeepSeek-R1-0528 ($2.15/1M tokens).

Overall, Claude Opus 4 works out roughly 32.9x more expensive than DeepSeek-R1-0528.*

* Using a 3:1 ratio of input to output tokens
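
As a sanity check, the blended figure can be reproduced with a few lines of arithmetic. A minimal sketch; prices are the per-1M-token rates listed on this page:

```python
# Worked example of the 3:1 blended-price comparison used above; prices are the
# per-1M-token rates listed on this page.
def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Average $ per 1M tokens for a workload with the given input:output mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

opus = blended_price(15.00, 75.00)  # $30.00 per 1M blended tokens
r1 = blended_price(0.50, 2.15)      # ~$0.91 per 1M blended tokens
print(f"Claude Opus 4 is {opus / r1:.1f}x more expensive")  # -> 32.9x
```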

Lowest available price from all providers:

Claude Opus 4 (Anthropic)
Input tokens: $15.00
Output tokens: $75.00
Best provider: Anthropic

DeepSeek-R1-0528 (DeepSeek)
Input tokens: $0.50
Output tokens: $2.15
Best provider: Deepinfra

Context Window

Maximum input and output token capacity

Claude Opus 4 accepts 200,000 input tokens compared to DeepSeek-R1-0528's 131,072 tokens. DeepSeek-R1-0528 can generate longer responses up to 131,072 tokens, while Claude Opus 4 is limited to 32,000 tokens.
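
For a rough feel of what these limits mean in practice, here is a minimal sketch that estimates whether a prompt fits each model's input window. The 4-characters-per-token heuristic is an assumption; real counts come from each provider's tokenizer or token-counting endpoint.

```python
# Rough fit check against each model's input window. The 4-characters-per-token
# heuristic is an assumption; use the provider's tokenizer for exact counts.
CONTEXT_LIMITS = {
    "claude-opus-4": 200_000,      # max input tokens (from this page)
    "deepseek-r1-0528": 131_072,
}

def fits(prompt: str, model: str, chars_per_token: float = 4.0) -> bool:
    estimated_tokens = len(prompt) / chars_per_token
    return estimated_tokens <= CONTEXT_LIMITS[model]

document = "word " * 150_000  # ~750K characters, roughly 187K tokens
for model in CONTEXT_LIMITS:
    print(model, fits(document, model))  # fits Claude Opus 4, not DeepSeek-R1
```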

Claude Opus 4 (Anthropic)
Input: 200,000 tokens
Output: 32,000 tokens

DeepSeek-R1-0528 (DeepSeek)
Input: 131,072 tokens
Output: 131,072 tokens

Input Capabilities

Supported data types and modalities

Claude Opus 4 supports multimodal inputs, whereas DeepSeek-R1-0528 does not.

Claude Opus 4 can handle both text and other forms of data like images, making it suitable for multimodal applications.

Claude Opus 4

Text: ✓
Images: ✓
Audio: ✗
Video: ✗

DeepSeek-R1-0528

Text: ✓
Images: ✗
Audio: ✗
Video: ✗
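
To illustrate the practical difference, here is a hedged sketch of sending an image to Claude Opus 4 through Anthropic's Messages API. The model ID and file name are assumptions; DeepSeek-R1-0528 has no equivalent image input.

```python
import base64

import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Assumed local file; Claude accepts images as base64-encoded content blocks.
with open("chart.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model ID; check Anthropic's docs
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png",
                        "data": image_data}},
            {"type": "text", "text": "Summarize what this chart shows."},
        ],
    }],
)
print(message.content[0].text)
```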

License

Usage and distribution terms

Claude Opus 4 is licensed under a proprietary license, while DeepSeek-R1-0528 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Claude Opus 4

Proprietary

Closed source

DeepSeek-R1-0528

MIT

Open weights
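
One concrete consequence of the MIT license: the R1-0528 weights can be downloaded and self-hosted. A minimal sketch, assuming the Hugging Face repo ID below (note the full model is far too large for a single consumer GPU):

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Assumed repo ID for the open, MIT-licensed weights.
local_dir = snapshot_download(repo_id="deepseek-ai/DeepSeek-R1-0528")
print(f"Weights downloaded to {local_dir}")

# Nothing comparable exists for Claude Opus 4: it is closed source and
# reachable only through Anthropic, AWS Bedrock, or Google's APIs.
```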

Release Timeline

When each model was launched

Claude Opus 4 was released on 2025-05-22, while DeepSeek-R1-0528 was released on 2025-05-28.

DeepSeek-R1-0528 is six days newer than Claude Opus 4.

Claude Opus 4: May 22, 2025
DeepSeek-R1-0528: May 28, 2025 (6 days newer)

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Claude Opus 4 is available from Anthropic, AWS Bedrock, and Google. DeepSeek-R1-0528 is available from Deepinfra, DeepSeek, and Novita.
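
As a usage sketch, DeepSeek's first-party endpoint is OpenAI-compatible, so the standard openai client works with a changed base URL. The base URL and "deepseek-reasoner" model name reflect DeepSeek's documented R1 endpoint at the time of writing; verify before relying on them.

```python
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek's name for the R1 model
    messages=[{"role": "user",
               "content": "Prove that the square root of 2 is irrational."}],
)
print(resp.choices[0].message.content)
```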

Claude Opus 4

Anthropic: $15.00/1M input, $75.00/1M output
AWS Bedrock: $15.00/1M input, $75.00/1M output
Google: $15.00/1M input, $75.00/1M output

DeepSeek-R1-0528

Deepinfra: $0.50/1M input, $2.15/1M output
DeepSeek: $0.55/1M input, $2.19/1M output
Novita: $0.70/1M input, $2.50/1M output

* Prices shown are per million tokens
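
Given the per-provider rates above, picking the cheapest host for a known workload is simple arithmetic. A sketch using the DeepSeek-R1-0528 prices from this page; the 30M/10M token workload is an arbitrary example:

```python
# Total cost per provider for a known workload, using the DeepSeek-R1-0528
# per-1M-token prices listed above.
PROVIDERS = {  # name: (input $/1M, output $/1M)
    "Deepinfra": (0.50, 2.15),
    "DeepSeek": (0.55, 2.19),
    "Novita": (0.70, 2.50),
}

def cost(input_tokens: int, output_tokens: int, prices: tuple) -> float:
    input_price, output_price = prices
    return input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price

workload = (30_000_000, 10_000_000)  # 30M input tokens, 10M output tokens
for name, prices in sorted(PROVIDERS.items(), key=lambda kv: cost(*workload, kv[1])):
    print(f"{name}: ${cost(*workload, prices):,.2f}")  # Deepinfra is cheapest
```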


Key Takeaways

Claude Opus 4:

Larger context window (200,000 tokens)
Supports multimodal inputs
Higher SWE-Bench Verified score (72.5% vs 44.6%)
Higher Terminal-Bench score (39.2% vs 5.7%)

DeepSeek-R1-0528:

Less expensive input tokens
Less expensive output tokens
Has open weights
Higher AIME 2025 score (87.5% vs 75.5%)
Higher GPQA score (81.0% vs 79.6%)


FAQ

Common questions about Claude Opus 4 vs DeepSeek-R1-0528

Q: Which model is better overall?
A: Both models are evenly matched across the benchmarks. Claude Opus 4 is made by Anthropic and DeepSeek-R1-0528 is made by DeepSeek. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

Q: How do their benchmark scores compare?
A: Claude Opus 4 scores MMMLU: 88.8%, TAU-bench Retail: 81.4%, GPQA: 79.6%, MMMU (validation): 76.5%, AIME 2025: 75.5%. DeepSeek-R1-0528 scores MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, MMLU-Pro: 85.0%.

Q: Which model is cheaper?
A: DeepSeek-R1-0528 is 30.0x cheaper for input tokens. Claude Opus 4 costs $15.00/M input and $75.00/M output via Anthropic. DeepSeek-R1-0528 costs $0.50/M input and $2.15/M output via Deepinfra.

Q: Which has the larger context window?
A: Claude Opus 4 supports 200K tokens and DeepSeek-R1-0528 supports 131K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Q: What are the key differences?
A: Key differences include context window (200K vs 131K), input pricing ($15.00 vs $0.50/M), multimodal support (yes vs no), and licensing (proprietary vs MIT). See the full comparison above for benchmark-by-benchmark results.

Q: Who makes these models?
A: Claude Opus 4 is developed by Anthropic and DeepSeek-R1-0528 is developed by DeepSeek.