Model Comparison

Mistral Small 3.2 24B Instruct vs Qwen2.5-Omni-7B

Mistral Small 3.2 24B Instruct shows notably better performance in the majority of benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

8 benchmarks

Mistral Small 3.2 24B Instruct outperforms in 5 benchmarks (AI2D, ChartQA, GPQA, MMLU-Pro, MMMU), while Qwen2.5-Omni-7B is better at 3 benchmarks (DocVQA, MATH, MathVista).
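The 5-vs-3 tally can be reproduced from the per-benchmark scores quoted later on this page (Key Takeaways section); a quick sketch:

```python
# Benchmark scores as quoted on this page, in percent.
# Format: benchmark -> (Mistral Small 3.2 24B Instruct, Qwen2.5-Omni-7B)
scores = {
    "AI2D":      (92.9, 83.2),
    "ChartQA":   (87.4, 85.3),
    "GPQA":      (46.1, 30.8),
    "MMLU-Pro":  (69.1, 47.0),
    "MMMU":      (62.5, 59.2),
    "DocVQA":    (94.9, 95.2),
    "MATH":      (69.4, 71.5),
    "MathVista": (67.1, 67.9),
}

mistral_wins = sorted(b for b, (m, q) in scores.items() if m > q)
qwen_wins = sorted(b for b, (m, q) in scores.items() if q > m)

print(f"Mistral leads in {len(mistral_wins)}: {', '.join(mistral_wins)}")
# → Mistral leads in 5: AI2D, ChartQA, GPQA, MMLU-Pro, MMMU
print(f"Qwen leads in {len(qwen_wins)}: {', '.join(qwen_wins)}")
# → Qwen leads in 3: DocVQA, MATH, MathVista
```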

Wed Apr 15 2026 • llm-stats.com

Pricing Analysis

Price comparison per million tokens

Cost data unavailable.

Model Size

Parameter count comparison

16.6B diff

Mistral Small 3.2 24B Instruct has 16.6B more parameters than Qwen2.5-Omni-7B, making it 237.1% larger.
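The size arithmetic checks out against the parameter counts listed below; a minimal verification:

```python
# Parameter counts as reported on this page, in billions.
mistral_b = 23.6  # Mistral Small 3.2 24B Instruct
qwen_b = 7.0      # Qwen2.5-Omni-7B

diff = mistral_b - qwen_b         # absolute gap in parameters
pct_larger = diff / qwen_b * 100  # gap relative to the smaller model

print(f"{diff:.1f}B more parameters ({pct_larger:.1f}% larger)")
# → 16.6B more parameters (237.1% larger)
```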

Mistral AI
Mistral Small 3.2 24B Instruct
23.6B parameters
Alibaba Cloud / Qwen Team
Qwen2.5-Omni-7B
7.0B parameters

Input Capabilities

Supported data types and modalities

Both Mistral Small 3.2 24B Instruct and Qwen2.5-Omni-7B support multimodal inputs.

Each can process several data types, which makes both models versatile across applications.

Mistral Small 3.2 24B Instruct

Text
Images
Audio
Video

Qwen2.5-Omni-7B

Text
Images
Audio
Video

License

Usage and distribution terms

Both models are licensed under Apache 2.0.

Both models share the same licensing terms, providing consistent usage rights.

Mistral Small 3.2 24B Instruct

Apache 2.0

Open weights

Qwen2.5-Omni-7B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Mistral Small 3.2 24B Instruct was released on 2025-06-20, while Qwen2.5-Omni-7B was released on 2025-03-27.

Mistral Small 3.2 24B Instruct is about three months newer than Qwen2.5-Omni-7B.
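The release gap can be computed directly from the two dates stated above:

```python
from datetime import date

# Release dates as stated on this page.
mistral_release = date(2025, 6, 20)  # Mistral Small 3.2 24B Instruct
qwen_release = date(2025, 3, 27)     # Qwen2.5-Omni-7B

gap_days = (mistral_release - qwen_release).days
# 30.44 ≈ average days per month (365.25 / 12)
print(f"{gap_days} days (~{gap_days / 30.44:.1f} months)")
# → 85 days (~2.8 months)
```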

Mistral Small 3.2 24B Instruct

Jun 20, 2025

9 months ago

~3mo newer
Qwen2.5-Omni-7B

Mar 27, 2025

1.1 years ago

Knowledge Cutoff

When training data ends

Mistral Small 3.2 24B Instruct has a documented knowledge cutoff of 2023-10-01, while Qwen2.5-Omni-7B's cutoff date is not specified.

We can confirm Mistral Small 3.2 24B Instruct's training data extends to 2023-10-01, but cannot make a direct comparison without Qwen2.5-Omni-7B's cutoff date.

Mistral Small 3.2 24B Instruct

Oct 2023

Qwen2.5-Omni-7B

Not specified

Key Takeaways

Mistral AI
Mistral Small 3.2 24B Instruct
Higher AI2D score (92.9% vs 83.2%)
Higher ChartQA score (87.4% vs 85.3%)
Higher GPQA score (46.1% vs 30.8%)
Higher MMLU-Pro score (69.1% vs 47.0%)
Higher MMMU score (62.5% vs 59.2%)

Alibaba Cloud / Qwen Team
Qwen2.5-Omni-7B
Higher DocVQA score (95.2% vs 94.9%)
Higher MATH score (71.5% vs 69.4%)
Higher MathVista score (67.9% vs 67.1%)

FAQ

Common questions about Mistral Small 3.2 24B Instruct vs Qwen2.5-Omni-7B

Which model performs better?
Mistral Small 3.2 24B Instruct shows notably better performance in the majority of benchmarks. Mistral Small 3.2 24B Instruct is made by Mistral AI and Qwen2.5-Omni-7B is made by Alibaba Cloud / Qwen Team. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Mistral Small 3.2 24B Instruct scores DocVQA: 94.9%, AI2D: 92.9%, HumanEval Plus: 92.9%, ChartQA: 87.4%, IF: 84.8%. Qwen2.5-Omni-7B scores DocVQA: 95.2%, VocalSound: 93.9%, GSM8k: 88.7%, GiantSteps Tempo: 88.0%, ChartQA: 85.3%.

Who develops each model?
Mistral Small 3.2 24B Instruct is developed by Mistral AI and Qwen2.5-Omni-7B is developed by Alibaba Cloud / Qwen Team.