Qwen3.5-397B-A17B vs Qwen3.5-27B Comparison

Comparing Qwen3.5-397B-A17B and Qwen3.5-27B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

32 benchmarks

Of the 32 benchmarks, Qwen3.5-397B-A17B leads on 28 (AA-LCR, BFCL-V4, BrowseComp, BrowseComp-zh, C-Eval, DeepPlanning, Global PIQA, GPQA, HMMT 2025, HMMT25, Include, LiveCodeBench v6, LongBench v2, MAXIFE, MMLU-Pro, MMLU-ProX, MMLU-Redux, MMMLU, Multi-Challenge, NOVA-63, PolyMATH, SuperGPQA, SWE-Bench Verified, t2-bench, Terminal-Bench 2.0, VITA-Bench, WideSearch, WMT24++), while Qwen3.5-27B leads on 3 (Humanity's Last Exam, IFEval, Seal-0).

Qwen3.5-397B-A17B significantly outperforms across most benchmarks.

Tue Mar 17 2026 • llm-stats.com

Arena Performance

Human preference votes

No arena vote data is shown for this pair.

Pricing Analysis

Price comparison per million tokens

Cost data is unavailable for Qwen3.5-27B.

Lowest available price from all providers
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input tokens: $0.60
Output tokens: $3.60
Best provider: Novita
Alibaba Cloud / Qwen Team
Qwen3.5-27B
Input tokens: not available
Output tokens: not available
Best provider: not available
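As a rough sketch, the listed Qwen3.5-397B-A17B rates (USD per 1M tokens, via Novita) translate into per-request costs as follows; the token counts in the example are illustrative assumptions, not measured values:

```python
# Per-request cost at the listed Qwen3.5-397B-A17B rates (USD per 1M tokens).
INPUT_RATE = 0.60   # $ per 1M input tokens
OUTPUT_RATE = 3.60  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the rates above."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# Illustrative example: a 10,000-token prompt with a 2,000-token completion
print(round(request_cost(10_000, 2_000), 4))  # 0.0132
```

At these rates, each output token costs 6x as much as an input token, so long completions dominate the bill.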

Model Size

Parameter count comparison

370.0B diff

Qwen3.5-397B-A17B has 370.0B more total parameters than Qwen3.5-27B, making it 1,370.4% (about 14.7x) larger. Note that the A17B suffix denotes a mixture-of-experts design with roughly 17B parameters active per token, so the gap in per-token compute is much smaller than the gap in total parameters.

Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B: 397.0B parameters
Alibaba Cloud / Qwen Team
Qwen3.5-27B: 27.0B parameters
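The stated difference and percentage are easy to verify; a minimal check of the arithmetic above:

```python
# Recompute the parameter comparison stated above (values in billions).
LARGE = 397.0  # Qwen3.5-397B-A17B total parameters
SMALL = 27.0   # Qwen3.5-27B parameters

diff = LARGE - SMALL             # absolute gap, in billions
pct_larger = diff / SMALL * 100  # relative size increase, in percent

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # 370.0B diff, 1370.4% larger
```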

Context Window

Maximum input and output token capacity

Only Qwen3.5-397B-A17B specifies context limits: 262,144 input tokens and 64,000 output tokens. No context figures are listed for Qwen3.5-27B.

Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input: 262,144 tokens
Output: 64,000 tokens
Alibaba Cloud / Qwen Team
Qwen3.5-27B
Input: not listed
Output: not listed
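A small sketch of budgeting a request against these limits, assuming (as the separately listed figures suggest) that the input and output caps apply independently; whether a given provider instead shares one budget across both is not specified here:

```python
# Check a request against Qwen3.5-397B-A17B's listed context limits.
MAX_INPUT = 262_144   # input tokens
MAX_OUTPUT = 64_000   # output tokens

def fits(prompt_tokens: int, max_new_tokens: int) -> bool:
    """True if the request stays within both listed limits (treated independently)."""
    return prompt_tokens <= MAX_INPUT and max_new_tokens <= MAX_OUTPUT

print(fits(200_000, 32_000))  # True
print(fits(300_000, 32_000))  # False
```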

Input Capabilities

Supported data types and modalities

Both Qwen3.5-397B-A17B and Qwen3.5-27B support the same multimodal inputs: text, images, audio, and video.

Qwen3.5-397B-A17B

Text
Images
Audio
Video

Qwen3.5-27B

Text
Images
Audio
Video

License

Usage and distribution terms

Both models are licensed under Apache 2.0.

Both models share the same licensing terms, providing consistent usage rights.

Qwen3.5-397B-A17B

Apache 2.0

Open weights

Qwen3.5-27B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while Qwen3.5-27B was released on 2026-02-24.

Qwen3.5-27B is 8 days (about one week) newer than Qwen3.5-397B-A17B.

Qwen3.5-397B-A17B

Feb 16, 2026

4 weeks ago

Qwen3.5-27B

Feb 24, 2026

3 weeks ago

1w newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

No cutoff dates available


Key Takeaways

Alibaba Cloud / Qwen Team

Qwen3.5-397B-A17B


Larger context window (262,144 tokens)
Higher AA-LCR score (68.7% vs 66.1%)
Higher BFCL-V4 score (72.9% vs 68.5%)
Higher BrowseComp score (69.0% vs 61.0%)
Higher BrowseComp-zh score (70.3% vs 62.1%)
Higher C-Eval score (93.0% vs 90.5%)
Higher DeepPlanning score (34.3% vs 22.6%)
Higher Global PIQA score (89.8% vs 87.5%)
Higher GPQA score (88.4% vs 85.5%)
Higher HMMT 2025 score (94.8% vs 92.0%)
Higher HMMT25 score (92.7% vs 89.8%)
Higher Include score (85.6% vs 81.6%)
Higher LiveCodeBench v6 score (83.6% vs 80.7%)
Higher LongBench v2 score (63.2% vs 60.6%)
Higher MAXIFE score (88.2% vs 88.0%)
Higher MMLU-Pro score (87.8% vs 86.1%)
Higher MMLU-ProX score (84.7% vs 82.2%)
Higher MMLU-Redux score (94.9% vs 93.2%)
Higher MMMLU score (88.5% vs 85.9%)
Higher Multi-Challenge score (67.6% vs 60.8%)
Higher NOVA-63 score (59.1% vs 58.1%)
Higher PolyMATH score (73.3% vs 71.2%)
Higher SuperGPQA score (70.4% vs 65.6%)
Higher SWE-Bench Verified score (76.4% vs 72.4%)
Higher t2-bench score (86.7% vs 79.0%)
Higher Terminal-Bench 2.0 score (52.5% vs 41.6%)
Higher VITA-Bench score (49.7% vs 41.9%)
Higher WideSearch score (74.0% vs 61.1%)
Higher WMT24++ score (78.9% vs 77.6%)
Alibaba Cloud / Qwen Team

Qwen3.5-27B


Higher Humanity's Last Exam score (48.5% vs 28.7%)
Higher IFEval score (95.0% vs 92.6%)
Higher Seal-0 score (47.2% vs 46.9%)

Detailed Comparison

AI Model Comparison Table
Feature | Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team) | Qwen3.5-27B (Alibaba Cloud / Qwen Team)