Qwen3.5-397B-A17B vs MiMo-V2-Flash Comparison

Comparing Qwen3.5-397B-A17B and MiMo-V2-Flash across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

10 benchmarks

Qwen3.5-397B-A17B outperforms in 9 benchmarks (BrowseComp, GPQA, HMMT 2025, Humanity's Last Exam, LiveCodeBench v6, LongBench v2, MMLU-Pro, SWE-Bench Verified, Terminal-Bench 2.0), while MiMo-V2-Flash is better at 1 benchmark (SWE-bench Multilingual).

Qwen3.5-397B-A17B significantly outperforms across most benchmarks.

Tue Mar 17 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

MiMo-V2-Flash costs less

For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) is 6.0x more expensive than MiMo-V2-Flash ($0.10/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is 12.0x more expensive than MiMo-V2-Flash ($0.30/1M tokens).

In conclusion, Qwen3.5-397B-A17B works out roughly 9.0x more expensive than MiMo-V2-Flash on a blended basis.*

* Using a 3:1 ratio of input to output tokens
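The footnote's 3:1 blend can be reproduced with a short sketch; the prices come from the figures above, while the helper function itself is illustrative:

```python
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Blended $/1M tokens, weighting input vs. output by the given ratio."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# Per the pricing cards above (lowest available provider price)
qwen_blended = blended_price(0.60, 3.60)  # $1.35 per 1M blended tokens
mimo_blended = blended_price(0.10, 0.30)  # $0.15 per 1M blended tokens
ratio = qwen_blended / mimo_blended       # 9.0x
```

At a 3:1 ratio, Qwen3.5-397B-A17B's higher output price dominates, which is why the blended gap (9.0x) sits between the 6.0x input gap and the 12.0x output gap.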

Lowest available price from all providers
Alibaba Cloud / Qwen Team — Qwen3.5-397B-A17B
Input tokens: $0.60
Output tokens: $3.60
Best provider: Novita

Xiaomi — MiMo-V2-Flash
Input tokens: $0.10
Output tokens: $0.30
Best provider: Xiaomi

Model Size

Parameter count comparison

88.0B diff

Qwen3.5-397B-A17B has 88.0B more parameters than MiMo-V2-Flash, making it 28.5% larger.

Alibaba Cloud / Qwen Team — Qwen3.5-397B-A17B: 397.0B parameters
Xiaomi — MiMo-V2-Flash: 309.0B parameters
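The "28.5% larger" figure follows from simple arithmetic on the parameter counts above; a quick check (values in billions):

```python
qwen_params = 397.0  # billions of parameters
mimo_params = 309.0  # billions of parameters

diff = qwen_params - mimo_params          # 88.0B more parameters
pct_larger = diff / mimo_params * 100     # ~28.5% larger than MiMo-V2-Flash
```

Note the percentage is relative to the smaller model (88.0 / 309.0), not the larger one.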

Context Window

Maximum input and output token capacity

Qwen3.5-397B-A17B accepts 262,144 input tokens compared to MiMo-V2-Flash's 256,000 tokens. Qwen3.5-397B-A17B can generate longer responses up to 64,000 tokens, while MiMo-V2-Flash is limited to 16,384 tokens.

Alibaba Cloud / Qwen Team — Qwen3.5-397B-A17B
Input: 262,144 tokens
Output: 64,000 tokens

Xiaomi — MiMo-V2-Flash
Input: 256,000 tokens
Output: 16,384 tokens
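One practical consequence of these limits: a request only succeeds if both the prompt and the requested response length fit the model's caps. A minimal sketch, using the token counts quoted above (the `LIMITS` table and `fits` helper are illustrative, not an API):

```python
# Context limits from the cards above
LIMITS = {
    "Qwen3.5-397B-A17B": {"input": 262_144, "output": 64_000},
    "MiMo-V2-Flash": {"input": 256_000, "output": 16_384},
}

def fits(model, prompt_tokens, max_response_tokens):
    """Return True if the request fits within the model's input/output caps."""
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_response_tokens <= lim["output"]

# A 260k-token prompt with a 32k-token response fits Qwen but not MiMo
fits("Qwen3.5-397B-A17B", 260_000, 32_000)  # True
fits("MiMo-V2-Flash", 260_000, 32_000)      # False
```

The output cap matters most for long-generation tasks (reports, large diffs), where MiMo-V2-Flash's 16,384-token ceiling is roughly a quarter of Qwen's.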

Input Capabilities

Supported data types and modalities

Qwen3.5-397B-A17B supports multimodal inputs, whereas MiMo-V2-Flash does not.

Qwen3.5-397B-A17B accepts text alongside other modalities such as images, making it suitable for multimodal applications.

Qwen3.5-397B-A17B: Text, Images, Audio, Video

MiMo-V2-Flash: Text

License

Usage and distribution terms

Qwen3.5-397B-A17B is licensed under Apache 2.0, while MiMo-V2-Flash uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Qwen3.5-397B-A17B: Apache 2.0 (open weights)

MiMo-V2-Flash: MIT (open weights)

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while MiMo-V2-Flash was released on 2025-12-16.

Qwen3.5-397B-A17B is 2 months newer than MiMo-V2-Flash.

Qwen3.5-397B-A17B: Feb 16, 2026 (4 weeks ago) — 2mo newer

MiMo-V2-Flash: Dec 16, 2025 (3 months ago)
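The two-month gap can be checked directly from the release dates above (a small illustrative calculation):

```python
from datetime import date

qwen_release = date(2026, 2, 16)
mimo_release = date(2025, 12, 16)

gap_days = (qwen_release - mimo_release).days  # 62 days, i.e. exactly 2 months
```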

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Qwen3.5-397B-A17B is available from Novita. MiMo-V2-Flash is available from Xiaomi. Provider availability can affect a model's quality and reliability.

Qwen3.5-397B-A17B
Novita — Input: $0.60/1M, Output: $3.60/1M

MiMo-V2-Flash
Xiaomi — Input: $0.10/1M, Output: $0.30/1M

* Prices shown are per million tokens


Key Takeaways

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)

Larger context window (262,144 tokens)
Supports multimodal inputs
Higher BrowseComp score (69.0% vs 58.3%)
Higher GPQA score (88.4% vs 83.7%)
Higher HMMT 2025 score (94.8% vs 84.4%)
Higher Humanity's Last Exam score (28.7% vs 22.1%)
Higher LiveCodeBench v6 score (83.6% vs 80.6%)
Higher LongBench v2 score (63.2% vs 60.6%)
Higher MMLU-Pro score (87.8% vs 84.9%)
Higher SWE-Bench Verified score (76.4% vs 73.4%)
Higher Terminal-Bench 2.0 score (52.5% vs 38.5%)

MiMo-V2-Flash (Xiaomi)

Less expensive input tokens
Less expensive output tokens
Higher SWE-bench Multilingual score (71.7% vs 69.3%)

Detailed Comparison

Feature | Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team) | MiMo-V2-Flash (Xiaomi)