Qwen3.5-397B-A17B vs GLM-4.7-Flash Comparison
Comparing Qwen3.5-397B-A17B and GLM-4.7-Flash across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
Qwen3.5-397B-A17B leads in all four compared benchmarks (BrowseComp, GPQA, Humanity's Last Exam, SWE-Bench Verified), while GLM-4.7-Flash leads in none.
Overall, Qwen3.5-397B-A17B significantly outperforms GLM-4.7-Flash across the benchmark suite.
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) is 8.6x more expensive than GLM-4.7-Flash ($0.07/1M tokens).
For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is 9.0x more expensive than GLM-4.7-Flash ($0.40/1M tokens).
In conclusion, Qwen3.5-397B-A17B is more expensive than GLM-4.7-Flash.*
* Using a 3:1 ratio of input to output tokens
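The blended-cost conclusion above can be reproduced with a short calculation. This is an illustrative sketch (the `blended_price` helper is not part of any official API); prices are the per-1M-token figures listed above, weighted at the stated 3:1 input:output ratio.

```python
# Blended price per 1M tokens, weighting input and output prices 3:1
# as in the footnote above. Prices are USD per 1M tokens.

def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per 1M tokens across input and output."""
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

qwen = blended_price(0.60, 3.60)   # (0.60 * 3 + 3.60 * 1) / 4 = 1.35
glm = blended_price(0.07, 0.40)    # (0.07 * 3 + 0.40 * 1) / 4 = 0.1525

print(f"Qwen3.5-397B-A17B blended: ${qwen:.4f}/1M tokens")
print(f"GLM-4.7-Flash blended:     ${glm:.4f}/1M tokens")
print(f"Cost multiple: {qwen / glm:.1f}x")
```

At a 3:1 mix, the blended gap (~8.9x) lands between the 8.6x input and 9.0x output multiples, as expected for a weighted average.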
Model Size
Parameter count comparison
Qwen3.5-397B-A17B has 367.0B more parameters than GLM-4.7-Flash (roughly 397B vs. 30B), making it 1223.3% (about 13x) larger.
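The percentage above follows directly from the parameter counts. A quick check (GLM-4.7-Flash's ~30B figure is inferred from the stated 367.0B difference):

```python
# Verify the parameter-count comparison.
qwen_params = 397.0   # billions, from the model name
glm_params = 30.0     # billions, inferred from the 367.0B gap

diff = qwen_params - glm_params          # absolute difference in billions
pct_larger = diff / glm_params * 100     # percentage larger

print(f"{diff:.1f}B more parameters, {pct_larger:.1f}% larger")
```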
Context Window
Maximum input and output token capacity
Qwen3.5-397B-A17B accepts 262,144 input tokens compared to GLM-4.7-Flash's 128,000 tokens. Qwen3.5-397B-A17B can generate longer responses up to 64,000 tokens, while GLM-4.7-Flash is limited to 16,384 tokens.
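In practice, these limits determine whether a given request fits at all. A minimal sketch, assuming you already have token counts for your prompt and desired completion (the `fits` helper and the `LIMITS` table are illustrative, not a real SDK):

```python
# Context-window limits from the comparison above.
LIMITS = {
    "Qwen3.5-397B-A17B": {"input": 262_144, "output": 64_000},
    "GLM-4.7-Flash": {"input": 128_000, "output": 16_384},
}

def fits(model: str, prompt_tokens: int, max_new_tokens: int) -> bool:
    """Return True if the request fits within the model's input and output limits."""
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_new_tokens <= lim["output"]

# A 200k-token prompt fits Qwen3.5-397B-A17B but exceeds GLM-4.7-Flash's window.
print(fits("Qwen3.5-397B-A17B", 200_000, 32_000))  # True
print(fits("GLM-4.7-Flash", 200_000, 8_000))       # False
```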
Input Capabilities
Supported data types and modalities
Qwen3.5-397B-A17B supports multimodal inputs, whereas GLM-4.7-Flash does not.
Qwen3.5-397B-A17B can handle both text and other forms of data like images, making it suitable for multimodal applications.
License
Usage and distribution terms
Qwen3.5-397B-A17B is licensed under Apache 2.0, while GLM-4.7-Flash uses MIT; both are open-weights releases.
License differences may affect how you can use these models in commercial or open-source projects.
Release Timeline
When each model was launched
Qwen3.5-397B-A17B was released on 2026-02-16, while GLM-4.7-Flash was released on 2026-01-19.
Qwen3.5-397B-A17B is about four weeks newer than GLM-4.7-Flash.
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Provider Availability
Qwen3.5-397B-A17B is available from Novita, while GLM-4.7-Flash is available from ZAI. Provider choice can affect serving quality and reliability.
Outputs Comparison
Key Takeaways
Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
GLM-4.7-Flash (Zhipu AI)
Detailed Comparison
| Feature | Qwen3.5-397B-A17B | GLM-4.7-Flash |
|---|---|---|
| Input price (per 1M tokens) | $0.60 | $0.07 |
| Output price (per 1M tokens) | $3.60 | $0.40 |
| Parameters | 397B | ~30B |
| Context window (input) | 262,144 tokens | 128,000 tokens |
| Max output tokens | 64,000 | 16,384 |
| Multimodal input | Yes | No (text only) |
| License | Apache 2.0 (open weights) | MIT (open weights) |
| Released | Feb 16, 2026 | Jan 19, 2026 |
| Knowledge cutoff | Not specified | Not specified |
| Provider | Novita | ZAI |
| Developer | Alibaba Cloud / Qwen Team | Zhipu AI |