Model Comparison

Magistral Small 2506 vs Mistral Small 3 24B Instruct

Magistral Small 2506 outperforms Mistral Small 3 24B Instruct on every benchmark the two models share.

Performance Benchmarks

Comparative analysis across standard metrics

Magistral Small 2506 outperforms on the 1 benchmark the two models share (GPQA), while Mistral Small 3 24B Instruct leads on 0.


Sat May 16 2026 • llm-stats.com

Arena Performance

Human preference votes

Model Size

Parameter count comparison

Both models have 24.0B parameters, so there is no difference in size.

Magistral Small 2506 (Mistral AI): 24.0B parameters
Mistral Small 3 24B Instruct (Mistral AI): 24.0B parameters

Context Window

Maximum input and output token capacity

Only Mistral Small 3 24B Instruct specifies a context window: 32,000 input tokens and 32,000 output tokens. Magistral Small 2506 does not list context figures here.

Magistral Small 2506 (Mistral AI): input - tokens, output - tokens
Mistral Small 3 24B Instruct (Mistral AI): input 32,000 tokens, output 32,000 tokens

License

Usage and distribution terms

Both models are licensed under Apache 2.0, so they share the same usage and distribution rights.

Magistral Small 2506: Apache 2.0 (open weights)
Mistral Small 3 24B Instruct: Apache 2.0 (open weights)

Release Timeline

When each model was launched

Magistral Small 2506 was released on 2025-06-10, while Mistral Small 3 24B Instruct was released on 2025-01-30.

Magistral Small 2506 is 4 months newer than Mistral Small 3 24B Instruct.

Magistral Small 2506: Jun 10, 2025 (11 months ago; 4 months newer)
Mistral Small 3 24B Instruct: Jan 30, 2025 (1.3 years ago)

Knowledge Cutoff

When training data ends

Magistral Small 2506 has a knowledge cutoff of 2025-06-01, while Mistral Small 3 24B Instruct has a cutoff of 2023-10-01.

Magistral Small 2506 has more recent training data (up to 2025-06-01), making it potentially better informed about events through that date compared to Mistral Small 3 24B Instruct (2023-10-01).

Magistral Small 2506: Jun 2025 (1.7 years newer)
Mistral Small 3 24B Instruct: Oct 2023


Key Takeaways

Magistral Small 2506: higher GPQA score (68.2% vs 45.3%)
Mistral Small 3 24B Instruct: larger specified context window (32,000 tokens)

Detailed Comparison

Feature            Magistral Small 2506 (Mistral AI)   Mistral Small 3 24B Instruct (Mistral AI)
Parameters         24.0B                               24.0B
Context window     -                                   32,000 in / 32,000 out
License            Apache 2.0                          Apache 2.0
Release date       Jun 10, 2025                        Jan 30, 2025
Knowledge cutoff   Jun 2025                            Oct 2023
GPQA               68.2%                               45.3%

FAQ

Common questions about Magistral Small 2506 vs Mistral Small 3 24B Instruct.

Which is better, Magistral Small 2506 or Mistral Small 3 24B Instruct?

Magistral Small 2506 leads on the benchmarks the two models share, and it is the newer model. Both are made by Mistral AI. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How does Magistral Small 2506 compare to Mistral Small 3 24B Instruct in benchmarks?

Magistral Small 2506 scores AIME 2024: 70.7%, GPQA: 68.2%, AIME 2025: 62.8%, LiveCodeBench: 51.3%. Mistral Small 3 24B Instruct scores Arena Hard: 87.6%, HumanEval: 84.8%, MT-Bench: 83.5%, IFEval: 82.9%, MATH: 70.6%. The two models report largely different benchmark suites, so their scores are directly comparable only where both report results, such as GPQA.

What are the context window sizes for Magistral Small 2506 and Mistral Small 3 24B Instruct?

Magistral Small 2506 does not list a context window here, while Mistral Small 3 24B Instruct supports 32K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
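A quick way to act on the 32K figure is to estimate whether a prompt will fit before sending it. The sketch below is a rough heuristic, not the model's actual tokenizer: it assumes roughly 4 characters per token, a common rule of thumb for English text, and the `fits_in_context` helper and its `reserved_output_tokens` parameter are illustrative names, not part of any Mistral API.

```python
# Rough check of whether a prompt fits in Mistral Small 3 24B Instruct's
# 32,000-token context window. The ~4 characters-per-token ratio is a
# heuristic assumption; real token counts depend on the model's tokenizer.
CONTEXT_WINDOW = 32_000
CHARS_PER_TOKEN = 4  # heuristic, varies by language and content

def fits_in_context(text: str, reserved_output_tokens: int = 1_000) -> bool:
    """Estimate whether `text` plus a reserved output budget fits."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_output_tokens <= CONTEXT_WINDOW

short_doc = "hello world " * 100     # ~1,200 chars, roughly 300 tokens
long_doc = "hello world " * 20_000   # ~240,000 chars, roughly 60,000 tokens
print(fits_in_context(short_doc))    # True
print(fits_in_context(long_doc))     # False
```

For precise counts you would tokenize with the model's own tokenizer rather than a character ratio, since the heuristic can be off by a factor of two for code or non-English text.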