Model Comparison

DeepSeek-R1-0528 vs Magistral Small 2506

DeepSeek-R1-0528 significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

DeepSeek-R1-0528 outperforms in 4 benchmarks (AIME 2024, AIME 2025, GPQA, LiveCodeBench), while Magistral Small 2506 is better at 0 benchmarks.


Fri Apr 17 2026 • llm-stats.com

Arena Performance

Human preference votes (no arena data is shown for this pair)

Pricing Analysis

Price comparison per million tokens

Cost data is unavailable for Magistral Small 2506; only DeepSeek-R1-0528 has published rates.

Lowest available price from all providers
DeepSeek
DeepSeek-R1-0528
Input tokens: $0.50
Output tokens: $2.15
Best provider: Deepinfra
Mistral AI
Magistral Small 2506
Input tokens: not listed
Output tokens: not listed
Best provider: not listed
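The per-million-token rates above translate directly into per-request cost. A minimal sketch using the listed DeepSeek-R1-0528 prices ($0.50 input, $2.15 output per 1M tokens); the function name and example token counts are illustrative:

```python
# Estimate request cost from per-million-token rates.
# Rates taken from the pricing table above (DeepSeek-R1-0528 via Deepinfra).
INPUT_RATE = 0.50   # USD per 1M input tokens
OUTPUT_RATE = 2.15  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token reply
print(round(request_cost(10_000, 2_000), 4))  # 0.0093
```

Output tokens dominate the bill here: at these rates each output token costs over four times as much as an input token.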

Model Size

Parameter count comparison

647.0B difference

DeepSeek-R1-0528 has 647.0B more parameters than Magistral Small 2506 (671.0B vs 24.0B), making it 2695.8% larger, i.e. roughly 28 times the size.
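The headline size figures can be checked with simple arithmetic:

```python
# Verify the parameter-count comparison quoted above.
deepseek_params = 671.0    # billions of parameters
magistral_params = 24.0    # billions of parameters

diff = deepseek_params - magistral_params
pct_larger = diff / magistral_params * 100  # "X% larger" relative to the smaller model

print(diff)                  # 647.0
print(round(pct_larger, 1))  # 2695.8
```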

DeepSeek
DeepSeek-R1-0528: 671.0B parameters
Mistral AI
Magistral Small 2506: 24.0B parameters

Context Window

Maximum input and output token capacity

Context window figures are published only for DeepSeek-R1-0528: 131,072 input tokens and 131,072 output tokens. Magistral Small 2506's limits are not listed here.

DeepSeek
DeepSeek-R1-0528
Input: 131,072 tokens
Output: 131,072 tokens
Mistral AI
Magistral Small 2506
Input: not listed
Output: not listed
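A fixed context window means the prompt and the reply must fit one token budget. A hedged sketch of budgeting against DeepSeek-R1-0528's 131,072-token limit; whether the input and output limits share one budget is an assumption here, and real token counts must come from the model's own tokenizer:

```python
# Budget a prompt against a fixed context window, reserving room for
# the model's reply. Assumes input and output share one 131,072-token
# budget; token counts are taken as given rather than computed.
CONTEXT_WINDOW = 131_072  # DeepSeek-R1-0528, per the table above

def fits(prompt_tokens: int, reserved_output_tokens: int) -> bool:
    """True if the prompt leaves enough room for the reserved reply."""
    return prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits(120_000, 8_000))   # True  (128,000 <= 131,072)
print(fits(128_000, 8_000))   # False (136,000 > 131,072)
```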

License

Usage and distribution terms

DeepSeek-R1-0528 is licensed under MIT, while Magistral Small 2506 uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-R1-0528

MIT

Open weights

Magistral Small 2506

Apache 2.0

Open weights

Release Timeline

When each model was launched

DeepSeek-R1-0528 was released on 2025-05-28, while Magistral Small 2506 was released on 2025-06-10.

Magistral Small 2506 is about two weeks newer than DeepSeek-R1-0528.

DeepSeek-R1-0528: May 28, 2025

Magistral Small 2506: Jun 10, 2025 (about two weeks newer)

Knowledge Cutoff

When training data ends

Magistral Small 2506 has a documented knowledge cutoff of 2025-06-01, while DeepSeek-R1-0528's cutoff date is not specified.

We can confirm Magistral Small 2506's training data extends to 2025-06-01, but cannot make a direct comparison without DeepSeek-R1-0528's cutoff date.

DeepSeek-R1-0528: not specified

Magistral Small 2506: Jun 2025


Key Takeaways

DeepSeek-R1-0528 leads on every metric compared here:

Larger documented context window (131,072 tokens)
Higher AIME 2024 score (91.4% vs 70.7%)
Higher AIME 2025 score (87.5% vs 62.8%)
Higher GPQA score (81.0% vs 68.2%)
Higher LiveCodeBench score (73.3% vs 51.3%)

Detailed Comparison

Feature: DeepSeek-R1-0528 (DeepSeek) vs Magistral Small 2506 (Mistral AI)
Parameters: 671.0B vs 24.0B
Context window: 131,072 tokens vs not listed
License: MIT vs Apache 2.0
Released: May 28, 2025 vs Jun 10, 2025
Input price (per 1M tokens): $0.50 vs not listed
Output price (per 1M tokens): $2.15 vs not listed

FAQ

Common questions about DeepSeek-R1-0528 vs Magistral Small 2506

DeepSeek-R1-0528 significantly outperforms across most benchmarks. DeepSeek-R1-0528 is made by DeepSeek and Magistral Small 2506 is made by Mistral AI. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.
DeepSeek-R1-0528 scores MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, MMLU-Pro: 85.0%. Magistral Small 2506 scores AIME 2024: 70.7%, GPQA: 68.2%, AIME 2025: 62.8%, LiveCodeBench: 51.3%.
DeepSeek-R1-0528 supports a 131K-token context window; Magistral Small 2506's context window is not listed here. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include licensing (MIT vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.
DeepSeek-R1-0528 is developed by DeepSeek and Magistral Small 2506 is developed by Mistral AI.