Model Comparison

Ministral 3 (8B Instruct 2512) vs Ministral 8B Instruct

Both models are evenly matched across the benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

Across the two shared benchmarks, Ministral 3 (8B Instruct 2512) leads on one (MATH), while Ministral 8B Instruct leads on the other (Arena Hard).

Thu Apr 02 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

No provider pricing is listed for Ministral 3 (8B Instruct 2512); Ministral 8B Instruct is priced at $0.10 per million tokens for both input and output.

Lowest available price from all providers
Mistral AI
Ministral 3 (8B Instruct 2512)
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown Organization

Mistral AI
Ministral 8B Instruct
Input tokens: $0.10
Output tokens: $0.10
Best provider: Mistral
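
For a sense of scale, the listed $0.10 per million tokens works out to fractions of a cent per typical request. The sketch below simply restates that arithmetic; the token counts in the example are made up for illustration, and the rates are the figures listed above.

# Rough per-request cost at the listed Ministral 8B Instruct rates
# ($0.10 per million input tokens, $0.10 per million output tokens).
# Token counts in the example are illustrative, not measured.

INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.10  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request at the per-million-token rates above."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000250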

Model Size

Parameter count comparison

Ministral 8B Instruct has roughly 19.8M more parameters than Ministral 3 (8B Instruct 2512), making it only about 0.2% larger; at the rounded 8.0B figures shown here, the two models are effectively the same size.

Mistral AI
Ministral 3 (8B Instruct 2512): 8.0B parameters

Mistral AI
Ministral 8B Instruct: 8.0B parameters
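
As a quick check on the figures above, the quoted 19.8M-parameter gap is a rounding-level difference on an 8B model. The snippet below just restates that arithmetic using the rounded counts shown here.

# The reported ~19.8M-parameter gap as a fraction of the rounded 8.0B size.
# Exact (un-rounded) parameter counts would shift the percentage slightly.
diff = 19.8e6   # reported parameter difference
base = 8.0e9    # rounded size of Ministral 3 (8B Instruct 2512)
print(f"{diff / base:.2%}")  # ~0.25%: effectively the same model size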

Context Window

Maximum input and output token capacity

Only Ministral 8B Instruct specifies its context limits: 128,000 tokens for both input and output. No context figures are listed for Ministral 3 (8B Instruct 2512).

Mistral AI
Ministral 3 (8B Instruct 2512)
Input: not specified
Output: not specified

Mistral AI
Ministral 8B Instruct
Input: 128,000 tokens
Output: 128,000 tokens
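
To put the 128,000-token window in perspective, the sketch below estimates whether a document fits before sending it. The 4-characters-per-token ratio is only a common rule of thumb, not the model's actual tokenizer, so treat the result as an estimate.

# Rough check of whether a document fits in Ministral 8B Instruct's
# 128,000-token context window, using the common ~4 characters/token
# heuristic instead of the model's real tokenizer.

CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # rule-of-thumb estimate

def fits_in_context(text: str, reserved_for_output: int = 1_000) -> bool:
    """Estimate whether `text` plus a reserved output budget fits in the window."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("hello " * 50_000))  # ~75k estimated tokens -> True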

Input Capabilities

Supported data types and modalities

Ministral 3 (8B Instruct 2512) supports multimodal inputs, whereas Ministral 8B Instruct does not.

Ministral 3 (8B Instruct 2512) can handle both text and other forms of data like images, making it suitable for multimodal applications.

Ministral 3 (8B Instruct 2512): text and image inputs

Ministral 8B Instruct: text inputs only
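
Because only Ministral 3 (8B Instruct 2512) accepts images, a request that mixes text and an image has to use a multimodal message format. The sketch below shows one plausible shape of such a call against Mistral's chat completions endpoint; the model identifier and image URL are placeholders, and the exact payload fields should be confirmed against the current API documentation rather than taken from this example.

# Hedged sketch of a text-plus-image request to Mistral's chat completions
# endpoint. The model ID and image URL are placeholders; confirm the payload
# shape against the official API docs before relying on it.
import os
import requests

payload = {
    "model": "ministral-3-8b-instruct-2512",  # placeholder identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": "https://example.com/photo.jpg"},
            ],
        }
    ],
}

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])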

License

Usage and distribution terms

Ministral 3 (8B Instruct 2512) is licensed under Apache 2.0, while Ministral 8B Instruct uses Mistral Research License.

License differences may affect how you can use these models in commercial or open-source projects.

Ministral 3 (8B Instruct 2512): Apache 2.0 (open weights)

Ministral 8B Instruct: Mistral Research License (open weights)

Release Timeline

When each model was launched

Ministral 3 (8B Instruct 2512) was released on 2025-12-04, while Ministral 8B Instruct was released on 2024-10-16.

Ministral 3 (8B Instruct 2512) is 14 months newer than Ministral 8B Instruct.

Ministral 3 (8B Instruct 2512): released Dec 4, 2025 (about 1.1 years newer)

Ministral 8B Instruct: released Oct 16, 2024

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

Ministral 3 (8B Instruct 2512): supports multimodal inputs; higher MATH score (87.6% vs 54.5%).
Ministral 8B Instruct: larger specified context window (128,000 tokens); higher Arena Hard score (70.9% vs 50.9%).

Detailed Comparison

FAQ

Common questions about Ministral 3 (8B Instruct 2512) vs Ministral 8B Instruct

Both models are evenly matched across the benchmarks, and both are made by Mistral AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.
Ministral 3 (8B Instruct 2512) scores MATH: 87.6%, Wild Bench: 66.8%, Arena Hard: 50.9%, MM-MT-Bench: 8.1%. Ministral 8B Instruct scores MT-Bench: 83.0%, Winogrande: 75.3%, ARC-C: 71.9%, Arena Hard: 70.9%, MBPP pass@1: 70.0%.
Ministral 3 (8B Instruct 2512) does not list a context window, while Ministral 8B Instruct supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include multimodal support (yes vs no) and licensing (Apache 2.0 vs Mistral Research License). See the full comparison above for benchmark-by-benchmark results.