Model Comparison

Qwen3.5-397B-A17B vs GLM-4.7-Flash

Qwen3.5-397B-A17B significantly outperforms GLM-4.7-Flash across most benchmarks, while GLM-4.7-Flash is roughly 8.9x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

Qwen3.5-397B-A17B leads in all 4 reported benchmarks (BrowseComp, GPQA, Humanity's Last Exam, SWE-Bench Verified); GLM-4.7-Flash leads in none.

Fri May 01 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

GLM-4.7-Flash costs less

For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) is 8.6x more expensive than GLM-4.7-Flash ($0.07/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is 9.0x more expensive than GLM-4.7-Flash ($0.40/1M tokens).

Overall, Qwen3.5-397B-A17B is roughly 8.9x more expensive than GLM-4.7-Flash.*

* Using a 3:1 ratio of input to output tokens
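The blended figure can be reproduced with a short sketch (prices taken from the cards in this section; the 3:1 input-to-output weighting is the page's stated assumption):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens at a given input:output ratio."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

qwen = blended_price(0.60, 3.60)  # Qwen3.5-397B-A17B, $/1M tokens
glm = blended_price(0.07, 0.40)   # GLM-4.7-Flash, $/1M tokens

print(f"Qwen blended: ${qwen:.2f}/1M, GLM blended: ${glm:.4f}/1M")
print(f"GLM-4.7-Flash is {qwen / glm:.1f}x cheaper")  # -> 8.9x
```

At the 3:1 ratio this gives $1.35/1M for Qwen3.5-397B-A17B versus about $0.15/1M for GLM-4.7-Flash, which is where the 8.9x figure comes from.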

Lowest available price from all providers:

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
Input tokens: $0.60
Output tokens: $3.60
Best provider: Novita

GLM-4.7-Flash (Zhipu AI)
Input tokens: $0.07
Output tokens: $0.40
Best provider: ZAI

Model Size

Parameter count comparison

Qwen3.5-397B-A17B has 367.0B more parameters than GLM-4.7-Flash, making it 1223.3% larger.
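The difference and percentage quoted above follow from a quick calculation:

```python
qwen_params_b = 397.0  # Qwen3.5-397B-A17B total parameters, in billions
glm_params_b = 30.0    # GLM-4.7-Flash parameters, in billions

diff_b = qwen_params_b - glm_params_b      # absolute difference
pct_larger = diff_b / glm_params_b * 100   # relative size increase

print(f"{diff_b:.1f}B more parameters ({pct_larger:.1f}% larger)")
```

Note that parameter count alone does not determine quality or serving cost; the "A17B" suffix indicates a mixture-of-experts design that activates only a subset of parameters per token.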

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team): 397.0B parameters
GLM-4.7-Flash (Zhipu AI): 30.0B parameters

Context Window

Maximum input and output token capacity

Qwen3.5-397B-A17B accepts 262,144 input tokens compared to GLM-4.7-Flash's 128,000 tokens. Qwen3.5-397B-A17B can generate longer responses up to 64,000 tokens, while GLM-4.7-Flash is limited to 16,384 tokens.

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
Input: 262,144 tokens
Output: 64,000 tokens

GLM-4.7-Flash (Zhipu AI)
Input: 128,000 tokens
Output: 16,384 tokens
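As a rough illustration of what these limits mean in practice, here is a minimal fit-check sketch, assuming the common ~4-characters-per-token heuristic (actual counts depend on each model's tokenizer):

```python
# Maximum input context, in tokens, from the comparison above.
CONTEXT_LIMITS = {
    "Qwen3.5-397B-A17B": 262_144,
    "GLM-4.7-Flash": 128_000,
}

def fits_in_context(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough fit check: estimate token count from character count."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_LIMITS[model]

# A ~600K-character document is roughly 150K estimated tokens:
doc = "x" * 600_000
print(fits_in_context(doc, "Qwen3.5-397B-A17B"))  # True  (150K <= 262,144)
print(fits_in_context(doc, "GLM-4.7-Flash"))      # False (150K > 128,000)
```

For real workloads, count tokens with the model's own tokenizer rather than a character heuristic.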

Input Capabilities

Supported data types and modalities

Qwen3.5-397B-A17B supports multimodal inputs, whereas GLM-4.7-Flash does not.

Qwen3.5-397B-A17B can handle both text and other forms of data like images, making it suitable for multimodal applications.

Qwen3.5-397B-A17B: Text, Images

GLM-4.7-Flash: Text only

License

Usage and distribution terms

Qwen3.5-397B-A17B is licensed under Apache 2.0, while GLM-4.7-Flash uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Qwen3.5-397B-A17B: Apache 2.0 (open weights)

GLM-4.7-Flash: MIT (open weights)

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while GLM-4.7-Flash was released on 2026-01-19.

Qwen3.5-397B-A17B is 1 month newer than GLM-4.7-Flash.

Qwen3.5-397B-A17B: Feb 16, 2026
GLM-4.7-Flash: Jan 19, 2026

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

Provider Availability

Qwen3.5-397B-A17B is available from Novita. GLM-4.7-Flash is available from ZAI.

Qwen3.5-397B-A17B
Novita: $0.60/1M input, $3.60/1M output

GLM-4.7-Flash
ZAI: $0.07/1M input, $0.40/1M output
* Prices shown are per million tokens

Outputs Comparison


Key Takeaways

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)

Larger context window (262,144 tokens)
Supports multimodal inputs
Higher BrowseComp score (69.0% vs 42.8%)
Higher GPQA score (88.4% vs 75.2%)
Higher Humanity's Last Exam score (28.7% vs 14.4%)
Higher SWE-Bench Verified score (76.4% vs 59.2%)

GLM-4.7-Flash (Zhipu AI)

Less expensive input tokens
Less expensive output tokens

Detailed Comparison

Feature              Qwen3.5-397B-A17B           GLM-4.7-Flash
Developer            Alibaba Cloud / Qwen Team   Zhipu AI
Parameters           397.0B                      30.0B
Context (in / out)   262,144 / 64,000 tokens     128,000 / 16,384 tokens
Input price          $0.60/1M                    $0.07/1M
Output price         $3.60/1M                    $0.40/1M
License              Apache 2.0                  MIT
Released             Feb 16, 2026                Jan 19, 2026

FAQ

Common questions about Qwen3.5-397B-A17B vs GLM-4.7-Flash

Qwen3.5-397B-A17B significantly outperforms across most benchmarks. Qwen3.5-397B-A17B is made by Alibaba Cloud / Qwen Team, and GLM-4.7-Flash is made by Zhipu AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

Qwen3.5-397B-A17B scores MMLU-Redux: 94.9%, HMMT 2025: 94.8%, C-Eval: 93.0%, HMMT25: 92.7%, IFEval: 92.6%. GLM-4.7-Flash scores AIME 2025: 91.6%, Tau-bench: 79.5%, GPQA: 75.2%, SWE-Bench Verified: 59.2%, BrowseComp: 42.8%.

GLM-4.7-Flash is 8.6x cheaper for input tokens. Qwen3.5-397B-A17B costs $0.60/M input and $3.60/M output via Novita. GLM-4.7-Flash costs $0.07/M input and $0.40/M output via ZAI.

Qwen3.5-397B-A17B supports 262K tokens and GLM-4.7-Flash supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Key differences include context window (262K vs 128K), input pricing ($0.60 vs $0.07/M), multimodal support (yes vs no), and licensing (Apache 2.0 vs MIT). See the full comparison above for benchmark-by-benchmark results.

Qwen3.5-397B-A17B is developed by Alibaba Cloud / Qwen Team; GLM-4.7-Flash is developed by Zhipu AI.