Model Comparison

Gemma 3n E2B Instructed vs Ministral 3 (14B Reasoning 2512)

Ministral 3 (14B Reasoning 2512) significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

Across 3 shared benchmarks, Gemma 3n E2B Instructed leads in none, while Ministral 3 (14B Reasoning 2512) leads in all 3 (AIME 2025, GPQA, LiveCodeBench).
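The head-to-head tally can be reproduced with a short sketch. The scores below are the shared-benchmark figures reported elsewhere on this page (Gemma's AIME 2025, GPQA, and LiveCodeBench numbers appear in the Key Takeaways section); the dictionary layout itself is illustrative:

```python
# Shared-benchmark scores (percent), as reported on this page.
scores = {
    "AIME 2025":     {"Gemma 3n E2B": 6.7,  "Ministral 3 14B": 85.0},
    "GPQA":          {"Gemma 3n E2B": 24.8, "Ministral 3 14B": 71.2},
    "LiveCodeBench": {"Gemma 3n E2B": 13.2, "Ministral 3 14B": 64.6},
}

# Count how many benchmarks each model wins (higher score wins).
wins = {"Gemma 3n E2B": 0, "Ministral 3 14B": 0}
for bench, by_model in scores.items():
    winner = max(by_model, key=by_model.get)
    wins[winner] += 1

print(wins)  # {'Gemma 3n E2B': 0, 'Ministral 3 14B': 3}
```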


Wed Apr 15 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens


Lowest available price from all providers
Gemma 3n E2B Instructed (Google)
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown Organization

Ministral 3 (14B Reasoning 2512) (Mistral AI)
Input tokens: $0.20
Output tokens: $0.20
Best provider: Mistral
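At per-million-token rates, per-request cost is simple to estimate. A minimal sketch, using the $0.20 input and output prices listed above (the token counts in the example are illustrative):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Ministral 3 (14B Reasoning 2512) at $0.20 input / $0.20 output:
cost = request_cost(8_000, 2_000, 0.20, 0.20)
print(f"${cost:.4f}")  # $0.0020
```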

Model Size

Parameter count comparison

Difference: 6.0B parameters

Ministral 3 (14B Reasoning 2512) has 6.0B more parameters than Gemma 3n E2B Instructed, making it 75.0% larger.

Gemma 3n E2B Instructed (Google): 8.0B parameters
Ministral 3 (14B Reasoning 2512) (Mistral AI): 14.0B parameters
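The 75.0% figure follows directly from the two parameter counts above; a quick check:

```python
# Parameter counts in billions, from the table above.
gemma_b, ministral_b = 8.0, 14.0

diff = ministral_b - gemma_b           # absolute difference
pct_larger = diff / gemma_b * 100      # relative to the smaller model

print(f"{diff:.1f}B more parameters, {pct_larger:.1f}% larger")
# 6.0B more parameters, 75.0% larger
```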

Context Window

Maximum input and output token capacity

Only Ministral 3 (14B Reasoning 2512) specifies context limits: 262,100 tokens for both input and output. Gemma 3n E2B Instructed's limits are not listed.

Gemma 3n E2B Instructed (Google)
Input: not specified
Output: not specified

Ministral 3 (14B Reasoning 2512) (Mistral AI)
Input: 262,100 tokens
Output: 262,100 tokens
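A 262,100-token window can be budget-checked before a request is sent. A minimal sketch, assuming a rough 4-characters-per-token heuristic (the real count depends on the model's tokenizer):

```python
CONTEXT_WINDOW = 262_100  # Ministral 3 (14B Reasoning 2512), from the table above

def fits(prompt: str, reserved_output_tokens: int,
         window: int = CONTEXT_WINDOW) -> bool:
    """Rough fit check: ~4 characters per token, leaving room for the reply."""
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserved_output_tokens <= window

# A 500,000-character prompt (~125,000 tokens) plus a 4,096-token reply budget:
print(fits("word " * 100_000, reserved_output_tokens=4_096))  # True
```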

Input Capabilities

Supported data types and modalities

Both Gemma 3n E2B Instructed and Ministral 3 (14B Reasoning 2512) support multimodal inputs, accepting text, images, audio, and video.

Gemma 3n E2B Instructed

Text
Images
Audio
Video

Ministral 3 (14B Reasoning 2512)

Text
Images
Audio
Video

License

Usage and distribution terms

Gemma 3n E2B Instructed is licensed under a proprietary license, while Ministral 3 (14B Reasoning 2512) uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Gemma 3n E2B Instructed: Proprietary (closed source)
Ministral 3 (14B Reasoning 2512): Apache 2.0 (open weights)

Release Timeline

When each model was launched

Gemma 3n E2B Instructed was released on 2025-06-26, while Ministral 3 (14B Reasoning 2512) was released on 2025-12-04.

Ministral 3 (14B Reasoning 2512) is 5 months newer than Gemma 3n E2B Instructed.

Gemma 3n E2B Instructed: Jun 26, 2025 (9 months ago)
Ministral 3 (14B Reasoning 2512): Dec 4, 2025 (4 months ago, 5 months newer)

Knowledge Cutoff

When training data ends

Gemma 3n E2B Instructed has a documented knowledge cutoff of 2024-06-01, while Ministral 3 (14B Reasoning 2512)'s cutoff date is not specified.

We can confirm Gemma 3n E2B Instructed's training data extends to 2024-06-01, but cannot make a direct comparison without Ministral 3 (14B Reasoning 2512)'s cutoff date.

Gemma 3n E2B Instructed: Jun 2024
Ministral 3 (14B Reasoning 2512): not specified



Key Takeaways

Ministral 3 (14B Reasoning 2512) offers:
Larger context window (262,100 tokens)
Open weights (Apache 2.0)
Higher AIME 2025 score (85.0% vs 6.7%)
Higher GPQA score (71.2% vs 24.8%)
Higher LiveCodeBench score (64.6% vs 13.2%)


FAQ

Common questions about Gemma 3n E2B Instructed vs Ministral 3 (14B Reasoning 2512)

Which model performs better?
Ministral 3 (14B Reasoning 2512) significantly outperforms across most benchmarks. Gemma 3n E2B Instructed is made by Google, and Ministral 3 (14B Reasoning 2512) is made by Mistral AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

What benchmark scores does each model report?
Gemma 3n E2B Instructed scores HumanEval: 66.5%, MMLU: 60.1%, Global-MMLU-Lite: 59.0%, MBPP: 56.6%, Global-MMLU: 55.1%. Ministral 3 (14B Reasoning 2512) scores AIME 2024: 89.8%, AIME 2025: 85.0%, GPQA: 71.2%, LiveCodeBench: 64.6%.

Which model has the larger context window?
Gemma 3n E2B Instructed's context length is unspecified, while Ministral 3 (14B Reasoning 2512) supports 262K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include licensing (Proprietary vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
Gemma 3n E2B Instructed is developed by Google, and Ministral 3 (14B Reasoning 2512) is developed by Mistral AI.