Qwen3.5-397B-A17B vs Qwen3.5-122B-A10B Comparison

Comparing Qwen3.5-397B-A17B and Qwen3.5-122B-A10B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

32 benchmarks

Qwen3.5-397B-A17B outperforms in 30 benchmarks (AA-LCR, BFCL-V4, BrowseComp, BrowseComp-zh, C-Eval, DeepPlanning, Global PIQA, GPQA, HMMT 2025, HMMT25, IFBench, Include, LiveCodeBench v6, LongBench v2, MAXIFE, MMLU-Pro, MMLU-ProX, MMLU-Redux, MMMLU, Multi-Challenge, NOVA-63, PolyMATH, Seal-0, SuperGPQA, SWE-Bench Verified, t2-bench, Terminal-Bench 2.0, VITA-Bench, WideSearch, WMT24++), while Qwen3.5-122B-A10B leads on 2 benchmarks (Humanity's Last Exam, IFEval).

Qwen3.5-397B-A17B significantly outperforms Qwen3.5-122B-A10B across most benchmarks.

Tue Mar 17 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

Qwen3.5-122B-A10B costs less

For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) is 1.5x more expensive than Qwen3.5-122B-A10B ($0.40/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is 1.1x more expensive than Qwen3.5-122B-A10B ($3.20/1M tokens).

In conclusion, Qwen3.5-397B-A17B is roughly 1.2x more expensive than Qwen3.5-122B-A10B on a blended basis.*

* Using a 3:1 ratio of input to output tokens
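The blended comparison above can be reproduced in a few lines. The prices and the 3:1 input:output weighting come from this page; the helper function name is our own.

```python
# Blended price per 1M tokens under a 3:1 input:output token ratio.
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    total = input_ratio + output_ratio
    return (input_ratio * input_per_m + output_ratio * output_per_m) / total

qwen_397 = blended_price(0.60, 3.60)  # $1.35 per 1M blended tokens
qwen_122 = blended_price(0.40, 3.20)  # $1.10 per 1M blended tokens
print(round(qwen_397 / qwen_122, 2))  # -> 1.23
```

So while the headline output-price gap is only 1.1x, the blended gap under this usage mix is about 1.2x.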

Lowest available price from all providers
Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
  Input tokens: $0.60
  Output tokens: $3.60
  Best provider: Novita

Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)
  Input tokens: $0.40
  Output tokens: $3.20
  Best provider: Novita

Model Size

Parameter count comparison


Qwen3.5-397B-A17B has 275.0B more parameters than Qwen3.5-122B-A10B, making it 225.4% larger.

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team): 397.0B parameters
Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team): 122.0B parameters
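As a quick check of the 225.4% figure, assuming the headline totals (397.0B and 122.0B) are exact:

```python
large, small = 397.0, 122.0      # total parameter counts, in billions
diff = large - small             # absolute gap: 275.0B parameters
pct_larger = diff / small * 100  # relative size of the larger model
print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # -> 275.0B diff, 225.4% larger
```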

Context Window

Maximum input and output token capacity

Both models have the same input context window of 262,144 tokens. Both models can generate responses up to 64,000 tokens.

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
  Input: 262,144 tokens
  Output: 64,000 tokens

Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)
  Input: 262,144 tokens
  Output: 64,000 tokens

Input Capabilities

Supported data types and modalities

Both Qwen3.5-397B-A17B and Qwen3.5-122B-A10B support multimodal inputs.

Both accept text, image, audio, and video inputs, making them suitable for the same range of multimodal applications.

Qwen3.5-397B-A17B

Text
Images
Audio
Video

Qwen3.5-122B-A10B

Text
Images
Audio
Video

License

Usage and distribution terms

Both models are released under the Apache 2.0 license with open weights, so their usage and redistribution terms are identical.

Qwen3.5-397B-A17B

Apache 2.0

Open weights

Qwen3.5-122B-A10B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while Qwen3.5-122B-A10B was released on 2026-02-24.

Qwen3.5-122B-A10B is 8 days newer than Qwen3.5-397B-A17B.

Qwen3.5-397B-A17B: Feb 16, 2026

Qwen3.5-122B-A10B: Feb 24, 2026 (about one week newer)
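The gap between the two release dates works out as follows (dates as listed above):

```python
from datetime import date

# Release dates taken from the timeline above.
gap = date(2026, 2, 24) - date(2026, 2, 16)
print(gap.days)  # -> 8 (just over one week)
```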

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Both Qwen3.5-397B-A17B and Qwen3.5-122B-A10B are currently available from a single provider, Novita. Provider availability can affect serving quality, latency, and reliability.

Qwen3.5-397B-A17B
  Novita: Input $0.60/1M, Output $3.60/1M

Qwen3.5-122B-A10B
  Novita: Input $0.40/1M, Output $3.20/1M

* Prices shown are per million tokens


Key Takeaways

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)

Higher AA-LCR score (68.7% vs 66.9%)
Higher BFCL-V4 score (72.9% vs 72.2%)
Higher BrowseComp score (69.0% vs 63.8%)
Higher BrowseComp-zh score (70.3% vs 69.9%)
Higher C-Eval score (93.0% vs 91.9%)
Higher DeepPlanning score (34.3% vs 24.1%)
Higher Global PIQA score (89.8% vs 88.4%)
Higher GPQA score (88.4% vs 86.6%)
Higher HMMT 2025 score (94.8% vs 91.4%)
Higher HMMT25 score (92.7% vs 90.3%)
Higher IFBench score (76.5% vs 76.1%)
Higher Include score (85.6% vs 82.8%)
Higher LiveCodeBench v6 score (83.6% vs 78.9%)
Higher LongBench v2 score (63.2% vs 60.2%)
Higher MAXIFE score (88.2% vs 87.9%)
Higher MMLU-Pro score (87.8% vs 86.7%)
Higher MMLU-ProX score (84.7% vs 82.2%)
Higher MMLU-Redux score (94.9% vs 94.0%)
Higher MMMLU score (88.5% vs 86.7%)
Higher Multi-Challenge score (67.6% vs 61.5%)
Higher NOVA-63 score (59.1% vs 58.6%)
Higher PolyMATH score (73.3% vs 68.9%)
Higher Seal-0 score (46.9% vs 44.1%)
Higher SuperGPQA score (70.4% vs 67.1%)
Higher SWE-Bench Verified score (76.4% vs 72.0%)
Higher t2-bench score (86.7% vs 79.5%)
Higher Terminal-Bench 2.0 score (52.5% vs 49.4%)
Higher VITA-Bench score (49.7% vs 33.6%)
Higher WideSearch score (74.0% vs 60.5%)
Higher WMT24++ score (78.9% vs 78.3%)
Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)

Less expensive input tokens
Less expensive output tokens
Higher Humanity's Last Exam score (47.5% vs 28.7%)
Higher IFEval score (93.4% vs 92.6%)

Detailed Comparison
