GLM-4.7 vs Qwen3.5-397B-A17B Comparison

Comparing GLM-4.7 and Qwen3.5-397B-A17B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

10 benchmarks

GLM-4.7 leads in 3 benchmarks (Humanity's Last Exam, IMO-AnswerBench, LiveCodeBench v6), while Qwen3.5-397B-A17B leads in the other 7 (BrowseComp, BrowseComp-zh, GPQA, MMLU-Pro, SWE-bench Multilingual, SWE-Bench Verified, Terminal-Bench 2.0).

Qwen3.5-397B-A17B shows notably better performance in the majority of benchmarks.

Mon Mar 16 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

GLM-4.7 costs less

For input processing, GLM-4.7 ($0.60/1M tokens) costs the same as Qwen3.5-397B-A17B ($0.60/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) costs about 1.6x as much as GLM-4.7 ($2.20/1M tokens).

In conclusion, Qwen3.5-397B-A17B is more expensive than GLM-4.7.*

* Using a 3:1 ratio of input to output tokens
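The blended figure behind that footnote is easy to reproduce. A minimal sketch (the `blended_price` helper is illustrative; prices are the ones listed above):

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Weighted-average price per 1M tokens for a given input:output token mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# Prices per 1M tokens from the comparison above, at the page's 3:1 ratio.
glm = blended_price(0.60, 2.20)    # (3 * 0.60 + 2.20) / 4 ≈ $1.00/1M
qwen = blended_price(0.60, 3.60)   # (3 * 0.60 + 3.60) / 4 ≈ $1.35/1M

print(f"GLM-4.7 blended: ${glm:.2f}/1M")
print(f"Qwen3.5-397B-A17B blended: ${qwen:.2f}/1M")
```

At this mix, Qwen3.5-397B-A17B works out roughly 35% more expensive overall, which is what drives the conclusion above.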

Lowest available price from all providers
Zhipu AI
GLM-4.7
Input tokens: $0.60
Output tokens: $2.20
Best provider: Fireworks
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input tokens: $0.60
Output tokens: $3.60
Best provider: Novita

Model Size

Parameter count comparison

39.0B diff

Qwen3.5-397B-A17B has 39.0B more parameters than GLM-4.7, making it 10.9% larger.
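The 10.9% figure follows directly from the two parameter counts. A quick check, using the values from this page:

```python
glm_params = 358.0   # GLM-4.7, billions of parameters
qwen_params = 397.0  # Qwen3.5-397B-A17B, billions of parameters

diff = qwen_params - glm_params          # 39.0B difference
pct_larger = 100.0 * diff / glm_params   # ≈ 10.9% larger

print(f"{diff:.1f}B more parameters, {pct_larger:.1f}% larger")
```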

Zhipu AI
GLM-4.7
358.0B parameters
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
397.0B parameters

Context Window

Maximum input and output token capacity

Qwen3.5-397B-A17B accepts 262,144 input tokens compared to GLM-4.7's 202,800 tokens. GLM-4.7 can generate longer responses up to 131,072 tokens, while Qwen3.5-397B-A17B is limited to 64,000 tokens.

Zhipu AI
GLM-4.7
Input: 202,800 tokens
Output: 131,072 tokens
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B
Input: 262,144 tokens
Output: 64,000 tokens

Input Capabilities

Supported data types and modalities

Both GLM-4.7 and Qwen3.5-397B-A17B support multimodal inputs.

Both models accept text, image, audio, and video, covering the same input modalities.

GLM-4.7

Text
Images
Audio
Video

Qwen3.5-397B-A17B

Text
Images
Audio
Video

License

Usage and distribution terms

GLM-4.7 is licensed under MIT, while Qwen3.5-397B-A17B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

GLM-4.7

MIT

Open weights

Qwen3.5-397B-A17B

Apache 2.0

Open weights

Release Timeline

When each model was launched

GLM-4.7 was released on 2025-12-22, while Qwen3.5-397B-A17B was released on 2026-02-16.

Qwen3.5-397B-A17B is 2 months newer than GLM-4.7.

GLM-4.7

Dec 22, 2025

Qwen3.5-397B-A17B

Feb 16, 2026

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

GLM-4.7 is available from Fireworks and Novita, while Qwen3.5-397B-A17B is available only from Novita. The choice of provider can affect serving quality and reliability.

GLM-4.7

Fireworks: Input $0.60/1M, Output $2.20/1M
Novita: Input $0.60/1M, Output $2.20/1M

Qwen3.5-397B-A17B

Novita: Input $0.60/1M, Output $3.60/1M

* Prices shown are per million tokens

Outputs Comparison


Key Takeaways

GLM-4.7 (Zhipu AI)

Less expensive output tokens ($2.20 vs $3.60 per 1M)
Higher Humanity's Last Exam score (42.8% vs 28.7%)
Higher IMO-AnswerBench score (82.0% vs 80.9%)
Higher LiveCodeBench v6 score (84.9% vs 83.6%)

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)

Larger context window (262,144 tokens)
Higher BrowseComp score (69.0% vs 52.0%)
Higher BrowseComp-zh score (70.3% vs 66.6%)
Higher GPQA score (88.4% vs 85.7%)
Higher MMLU-Pro score (87.8% vs 84.3%)
Higher SWE-bench Multilingual score (69.3% vs 66.7%)
Higher SWE-Bench Verified score (76.4% vs 73.8%)
Higher Terminal-Bench 2.0 score (52.5% vs 41.0%)

Detailed Comparison

AI Model Comparison Table
Feature
Zhipu AI
GLM-4.7
Alibaba Cloud / Qwen Team
Qwen3.5-397B-A17B