Model Comparison

DeepSeek-V3 0324 vs Magistral Small 2506

Magistral Small 2506 performs better on the majority of the shared benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

Across the 3 shared benchmarks, DeepSeek-V3 0324 outperforms on 1 (GPQA), while Magistral Small 2506 is better on 2 (AIME 2024, LiveCodeBench).


Wed Apr 29 2026 • llm-stats.com

Arena Performance

Human preference votes

No arena vote data is listed for this pair.

Pricing Analysis

Price comparison per million tokens

Pricing data is listed for DeepSeek-V3 0324 only; no provider pricing is available for Magistral Small 2506.

Lowest available price from all providers
DeepSeek-V3 0324 (DeepSeek)
Input tokens: $0.28 per million
Output tokens: $1.14 per million
Best provider: Novita

Magistral Small 2506 (Mistral AI)
Input tokens: not listed
Output tokens: not listed
Best provider: not listed
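Per-million-token prices translate into request costs with simple arithmetic. The sketch below is an illustrative helper (not part of llm-stats.com), using the DeepSeek-V3 0324 prices from the table above ($0.28/M input, $1.14/M output via Novita):

```python
# Estimate the dollar cost of one request from per-million-token prices.
# Prices taken from the comparison above; the helper itself is hypothetical.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in dollars for a single request."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Example: 50K input tokens, 2K output tokens on DeepSeek-V3 0324.
cost = request_cost(50_000, 2_000, 0.28, 1.14)
print(f"${cost:.4f}")  # $0.0163
```

At these rates, input tokens dominate the bill for long prompts with short completions, which is worth keeping in mind when comparing providers.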

Model Size

Parameter count comparison

647.0B diff

DeepSeek-V3 0324 has 647.0B more parameters than Magistral Small 2506, making it 2695.8% larger.

DeepSeek-V3 0324 (DeepSeek): 671.0B parameters

Magistral Small 2506 (Mistral AI): 24.0B parameters
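The difference and percentage figures above follow directly from the two parameter counts:

```python
# Reproduce the size comparison from the published parameter counts.
deepseek_b = 671.0   # DeepSeek-V3 0324 parameters, in billions
magistral_b = 24.0   # Magistral Small 2506 parameters, in billions

diff_b = deepseek_b - magistral_b            # absolute difference: 647.0B
pct_larger = diff_b / magistral_b * 100      # relative size increase: ~2695.8%
ratio = deepseek_b / magistral_b             # ~28x as many parameters

print(f"{diff_b:.1f}B diff, {pct_larger:.1f}% larger, {ratio:.1f}x")
```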

Context Window

Maximum input and output token capacity

Only DeepSeek-V3 0324 specifies its context window: 163,840 input tokens and 163,840 output tokens. Magistral Small 2506's limits are not listed.

DeepSeek-V3 0324 (DeepSeek)
Input: 163,840 tokens
Output: 163,840 tokens

Magistral Small 2506 (Mistral AI)
Input: not listed
Output: not listed
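A quick way to sanity-check whether a document fits DeepSeek-V3 0324's 163,840-token input window is the rough ~4-characters-per-token heuristic. This is only an approximation (real token counts depend on the model's tokenizer and the text itself), but it is useful for ballpark sizing:

```python
# Rough fit check against DeepSeek-V3 0324's 163,840-token input window.
# CHARS_PER_TOKEN is a common heuristic, NOT the model's actual tokenizer.

CONTEXT_LIMIT = 163_840
CHARS_PER_TOKEN = 4  # heuristic for English-like text

def fits_in_context(text_chars: int, limit: int = CONTEXT_LIMIT) -> bool:
    """Estimate whether a text of the given length fits in the window."""
    est_tokens = text_chars / CHARS_PER_TOKEN
    return est_tokens <= limit

print(fits_in_context(400_000))    # ~100K estimated tokens -> True
print(fits_in_context(1_000_000))  # ~250K estimated tokens -> False
```

For precise counts, use the model's own tokenizer rather than this heuristic.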

License

Usage and distribution terms

DeepSeek-V3 0324 is licensed under MIT + Model License (Commercial use allowed), while Magistral Small 2506 uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-V3 0324: MIT + Model License (commercial use allowed), open weights

Magistral Small 2506: Apache 2.0, open weights

Release Timeline

When each model was launched

DeepSeek-V3 0324 was released on 2025-03-25, while Magistral Small 2506 was released on 2025-06-10.

Magistral Small 2506 is about 2.5 months newer than DeepSeek-V3 0324.

DeepSeek-V3 0324: Mar 25, 2025 (1.1 years ago)

Magistral Small 2506: Jun 10, 2025 (10 months ago, about 2.5 months newer)
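The gap between the two release dates works out as follows:

```python
# Compute the release gap from the dates given above.
from datetime import date

deepseek_release = date(2025, 3, 25)   # DeepSeek-V3 0324
magistral_release = date(2025, 6, 10)  # Magistral Small 2506

gap = magistral_release - deepseek_release
print(gap.days)  # 77 days, roughly 2.5 months
```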

Knowledge Cutoff

When training data ends

Magistral Small 2506 has a documented knowledge cutoff of 2025-06-01, while DeepSeek-V3 0324's cutoff date is not specified.

We can confirm Magistral Small 2506's training data extends to 2025-06-01, but cannot make a direct comparison without DeepSeek-V3 0324's cutoff date.

DeepSeek-V3 0324: not specified

Magistral Small 2506: Jun 2025

Outputs Comparison


Key Takeaways

DeepSeek-V3 0324: larger context window (163,840 tokens)
DeepSeek-V3 0324: higher GPQA score (68.4% vs 68.2%)
Magistral Small 2506: higher AIME 2024 score (70.7% vs 59.4%)
Magistral Small 2506: higher LiveCodeBench score (51.3% vs 49.2%)

Detailed Comparison


FAQ

Common questions about DeepSeek-V3 0324 vs Magistral Small 2506

Which model performs better overall?
Magistral Small 2506 performs better on the majority of the shared benchmarks. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek-V3 0324 scores MATH-500: 94.0%, MMLU-Pro: 81.2%, GPQA: 68.4%, AIME 2024: 59.4%, LiveCodeBench: 49.2%. Magistral Small 2506 scores AIME 2024: 70.7%, GPQA: 68.2%, AIME 2025: 62.8%, LiveCodeBench: 51.3%.

Which has the larger context window?
DeepSeek-V3 0324 supports 164K tokens, while Magistral Small 2506 does not list a context window. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include licensing: MIT + Model License (commercial use allowed) for DeepSeek-V3 0324 versus Apache 2.0 for Magistral Small 2506. See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
DeepSeek-V3 0324 is developed by DeepSeek and Magistral Small 2506 is developed by Mistral AI.