Qwen3.5-397B-A17B vs GLM-4.7 Comparison

Comparing Qwen3.5-397B-A17B and GLM-4.7 across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

10 benchmarks

Qwen3.5-397B-A17B leads on 7 of the 10 benchmarks (BrowseComp, BrowseComp-zh, GPQA, MMLU-Pro, SWE-bench Multilingual, SWE-Bench Verified, Terminal-Bench 2.0), while GLM-4.7 leads on 3 (Humanity's Last Exam, IMO-AnswerBench, LiveCodeBench v6).

Qwen3.5-397B-A17B shows notably better performance in the majority of benchmarks.

Tue Mar 17 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

GLM-4.7 costs less

For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) costs the same as GLM-4.7 ($0.60/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is about 1.6x more expensive than GLM-4.7 ($2.20/1M tokens).

Taken together, Qwen3.5-397B-A17B is therefore more expensive than GLM-4.7.*

* Using a 3:1 ratio of input to output tokens
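The 3:1 blended comparison can be sketched as a weighted average of the per-1M-token prices. This is a minimal illustration, assuming the page's 3:1 input:output weighting; the `blended_price` helper name is my own.

```python
# Blended price per 1M tokens for a given input:output token mix.
# Prices are the per-1M-token figures listed above.

def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for the given mix."""
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

qwen = blended_price(0.60, 3.60)  # Qwen3.5-397B-A17B
glm = blended_price(0.60, 2.20)   # GLM-4.7

print(f"Qwen3.5-397B-A17B: ${qwen:.2f}/1M blended")  # $1.35/1M
print(f"GLM-4.7:           ${glm:.2f}/1M blended")   # $1.00/1M
```

At the assumed 3:1 mix, GLM-4.7's cheaper output tokens give it a lower blended price despite the identical input price.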

Lowest available price from all providers
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input tokens: $0.60
Output tokens: $3.60
Best provider: Novita
Zhipu AI
GLM-4.7
Input tokens: $0.60
Output tokens: $2.20
Best provider: Fireworks

Model Size

Parameter count comparison

39.0B diff

Qwen3.5-397B-A17B has 39.0B more parameters than GLM-4.7, making it 10.9% larger.
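The size gap above follows directly from the two parameter counts. A minimal sketch of the arithmetic, using the figures (in billions) stated on this page:

```python
# Parameter-count comparison (figures in billions, from the cards below).
qwen_params = 397.0  # Qwen3.5-397B-A17B
glm_params = 358.0   # GLM-4.7

diff = qwen_params - glm_params        # absolute difference in billions
pct_larger = diff / glm_params * 100   # relative to GLM-4.7

print(f"{diff:.1f}B difference, {pct_larger:.1f}% larger")  # 39.0B difference, 10.9% larger
```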

Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
397.0B parameters
Zhipu AI
GLM-4.7
358.0B parameters

Context Window

Maximum input and output token capacity

Qwen3.5-397B-A17B accepts 262,144 input tokens compared to GLM-4.7's 202,800 tokens. GLM-4.7 can generate longer responses up to 131,072 tokens, while Qwen3.5-397B-A17B is limited to 64,000 tokens.

Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input: 262,144 tokens
Output: 64,000 tokens
Zhipu AI
GLM-4.7
Input: 202,800 tokens
Output: 131,072 tokens

Input Capabilities

Supported data types and modalities

Both Qwen3.5-397B-A17B and GLM-4.7 support multimodal inputs.

Both accept text, images, audio, and video, so the two models offer equivalent input versatility.

Qwen3.5-397B-A17B

Text
Images
Audio
Video

GLM-4.7

Text
Images
Audio
Video

License

Usage and distribution terms

Qwen3.5-397B-A17B is licensed under Apache 2.0, while GLM-4.7 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Qwen3.5-397B-A17B

Apache 2.0

Open weights

GLM-4.7

MIT

Open weights

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while GLM-4.7 was released on 2025-12-22.

Qwen3.5-397B-A17B is 2 months newer than GLM-4.7.

Qwen3.5-397B-A17B

Feb 16, 2026

4 weeks ago

2mo newer
GLM-4.7

Dec 22, 2025

2 months ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

No cutoff dates available

Provider Availability

Qwen3.5-397B-A17B is available from Novita. GLM-4.7 is available from Fireworks and Novita. Which providers host a model can affect its serving quality and reliability.

Qwen3.5-397B-A17B

Novita
Input: $0.60/1M • Output: $3.60/1M

GLM-4.7

Fireworks
Input: $0.60/1M • Output: $2.20/1M
Novita
Input: $0.60/1M • Output: $2.20/1M
* Prices shown are per million tokens
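The per-1M-token prices above translate to per-request costs by simple scaling. A minimal sketch with hypothetical token counts (the 30,000/2,000 figures and the `request_cost` helper are example values of mine, not from the page):

```python
# Estimated dollar cost of a single request from per-1M-token prices.

def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in dollars, given token counts and per-1M-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical request: 30,000 input tokens, 2,000 output tokens.
qwen_cost = request_cost(30_000, 2_000, 0.60, 3.60)  # Qwen3.5-397B-A17B via Novita
glm_cost = request_cost(30_000, 2_000, 0.60, 2.20)   # GLM-4.7 via Fireworks/Novita

print(f"Qwen3.5-397B-A17B: ${qwen_cost:.4f}")  # $0.0252
print(f"GLM-4.7:           ${glm_cost:.4f}")   # $0.0224
```

For input-heavy workloads like this one the gap narrows, since both models share the same $0.60/1M input price.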


Key Takeaways

Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B

Larger context window (262,144 tokens)
Higher BrowseComp score (69.0% vs 52.0%)
Higher BrowseComp-zh score (70.3% vs 66.6%)
Higher GPQA score (88.4% vs 85.7%)
Higher MMLU-Pro score (87.8% vs 84.3%)
Higher SWE-bench Multilingual score (69.3% vs 66.7%)
Higher SWE-Bench Verified score (76.4% vs 73.8%)
Higher Terminal-Bench 2.0 score (52.5% vs 41.0%)

Zhipu AI
GLM-4.7

Less expensive output tokens ($2.20 vs $3.60 per 1M)
Higher Humanity's Last Exam score (42.8% vs 28.7%)
Higher IMO-AnswerBench score (82.0% vs 80.9%)
Higher LiveCodeBench v6 score (84.9% vs 83.6%)

Detailed Comparison

AI Model Comparison Table: Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team) vs GLM-4.7 (Zhipu AI)