Qwen3.5-397B-A17B vs GLM-4.7 Comparison
Comparing Qwen3.5-397B-A17B and GLM-4.7 across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
Qwen3.5-397B-A17B leads in 7 benchmarks (BrowseComp, BrowseComp-zh, GPQA, MMLU-Pro, SWE-bench Multilingual, SWE-Bench Verified, Terminal-Bench 2.0), while GLM-4.7 leads in 3 (Humanity's Last Exam, IMO-AnswerBench, LiveCodeBench v6).
Overall, Qwen3.5-397B-A17B shows notably better performance on the majority of benchmarks.
Pricing Analysis
Price comparison per million tokens
For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) costs the same as GLM-4.7 ($0.60/1M tokens).
For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) costs 1.6x as much as GLM-4.7 ($2.20/1M tokens).
In conclusion, Qwen3.5-397B-A17B is about 35% more expensive than GLM-4.7 overall.*
* Using a 3:1 ratio of input to output tokens
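The blended figure above can be reproduced with a short calculation. The helper below is illustrative (not from either provider's SDK); the prices are the ones quoted in this comparison.

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for a given input:output token ratio."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# At a 3:1 input:output ratio:
qwen = blended_price(0.60, 3.60)  # (3 * 0.60 + 3.60) / 4 = $1.35 per 1M tokens
glm = blended_price(0.60, 2.20)   # (3 * 0.60 + 2.20) / 4 = $1.00 per 1M tokens
print(f"Qwen3.5-397B-A17B: ${qwen:.2f}/1M, GLM-4.7: ${glm:.2f}/1M")
```

A workload that is more output-heavy than 3:1 would widen the gap, since the models only differ on output pricing.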
Model Size
Parameter count comparison
Qwen3.5-397B-A17B has 39.0B more parameters than GLM-4.7, making it 10.9% larger.
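The percentage follows from the two figures above; note that GLM-4.7's total of 358B is derived here from the stated 39.0B difference, not quoted directly.

```python
# Sanity-check the parameter comparison (sizes in billions of parameters).
qwen_params = 397.0
glm_params = qwen_params - 39.0  # 358B, implied by the 39.0B difference stated above

pct_larger = (qwen_params - glm_params) / glm_params * 100
print(f"{pct_larger:.1f}% larger")  # -> 10.9% larger
```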
Context Window
Maximum input and output token capacity
Qwen3.5-397B-A17B accepts 262,144 input tokens compared to GLM-4.7's 202,800 tokens. GLM-4.7 can generate longer responses up to 131,072 tokens, while Qwen3.5-397B-A17B is limited to 64,000 tokens.
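These limits matter in practice when sizing requests. A minimal pre-flight check, using the limits quoted above (the token counts in the usage example are illustrative):

```python
# Context limits from this comparison, in tokens.
LIMITS = {
    "Qwen3.5-397B-A17B": {"input": 262_144, "output": 64_000},
    "GLM-4.7": {"input": 202_800, "output": 131_072},
}

def fits(model: str, prompt_tokens: int, max_new_tokens: int) -> bool:
    """Return True if a request stays within the model's input and output caps."""
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_new_tokens <= lim["output"]

print(fits("Qwen3.5-397B-A17B", 240_000, 32_000))  # True: within both limits
print(fits("GLM-4.7", 240_000, 32_000))            # False: prompt exceeds the 202,800-token input cap
```

The trade-off cuts both ways: Qwen3.5-397B-A17B accommodates longer prompts, while GLM-4.7 allows longer generations.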
Input Capabilities
Supported data types and modalities
Both Qwen3.5-397B-A17B and GLM-4.7 support multimodal inputs, so each can process several types of data beyond plain text, offering versatility in application.
License
Usage and distribution terms
Qwen3.5-397B-A17B is licensed under Apache 2.0, while GLM-4.7 uses MIT. Both are permissive open-weight licenses, so either model can be used in commercial or open-source projects; Apache 2.0 additionally includes an explicit patent grant, which MIT does not.
Release Timeline
When each model was launched
Qwen3.5-397B-A17B was released on 2026-02-16, while GLM-4.7 was released on 2025-12-22.
Qwen3.5-397B-A17B is about two months newer than GLM-4.7.
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Provider Availability
Qwen3.5-397B-A17B is available from Novita, while GLM-4.7 is available from Fireworks and Novita. Provider choice can affect a model's serving quality and reliability.
Key Takeaways
Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
GLM-4.7 (Zhipu AI)