Model Comparison
GLM-4.7 vs Qwen3-235B-A22B-Thinking-2507
GLM-4.7 outperforms Qwen3-235B-A22B-Thinking-2507 on most benchmarks, while Qwen3-235B-A22B-Thinking-2507 is marginally cheaper per token on a blended basis.
Performance Benchmarks
Comparative analysis across standard metrics
GLM-4.7 leads on four of the five benchmarks compared (AIME 2025, GPQA, Humanity's Last Exam, LiveCodeBench v6), while Qwen3-235B-A22B-Thinking-2507 leads on one (MMLU-Pro).
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
For input processing, GLM-4.7 ($0.60/1M tokens) is 2.0x more expensive than Qwen3-235B-A22B-Thinking-2507 ($0.30/1M tokens).
For output processing, GLM-4.7 ($2.20/1M tokens) is 1.4x cheaper than Qwen3-235B-A22B-Thinking-2507 ($3.00/1M tokens).
At the stated 3:1 input:output ratio, GLM-4.7 works out slightly more expensive overall: roughly $1.00 per 1M blended tokens versus $0.975 for Qwen3-235B-A22B-Thinking-2507.*
* Using a 3:1 ratio of input to output tokens
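To make that footnote concrete, the blended figures can be reproduced with a few lines of Python. This is simply the weighted average implied by the 3:1 assumption, using the per-token prices quoted above:

```python
# Blended per-million-token cost at the 3:1 input:output ratio
# assumed by the footnote above (prices in USD per 1M tokens).
def blended_cost(input_price: float, output_price: float,
                 input_weight: int = 3, output_weight: int = 1) -> float:
    total = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total

glm = blended_cost(0.60, 2.20)   # (3 * 0.60 + 2.20) / 4 = 1.000
qwen = blended_cost(0.30, 3.00)  # (3 * 0.30 + 3.00) / 4 = 0.975
print(f"GLM-4.7 blended:  ${glm:.3f} per 1M tokens")
print(f"Qwen3 blended:    ${qwen:.3f} per 1M tokens")
print(f"GLM-4.7 costs {glm / qwen:.2f}x as much as Qwen3")
```

At this ratio the gap is only about 2.6%, and because GLM-4.7's output tokens are the cheaper of the two, workloads that skew output-heavy would tip the comparison the other way.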
Model Size
Parameter count comparison
GLM-4.7 has 123.0B more parameters than Qwen3-235B-A22B-Thinking-2507 (roughly 358B versus 235B), making it 52.3% larger.
Context Window
Maximum input and output token capacity
Qwen3-235B-A22B-Thinking-2507 accepts 262,144 input tokens compared to GLM-4.7's 202,800 tokens. Both models can generate responses up to 131,072 tokens.
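In practice, the question is whether a given prompt plus the requested completion fits each window. A minimal sketch, using the rough 4-characters-per-token heuristic (an assumption; the models' actual tokenizers will differ):

```python
# Context limits from the comparison above (tokens).
LIMITS = {
    "GLM-4.7": {"input": 202_800, "output": 131_072},
    "Qwen3-235B-A22B-Thinking-2507": {"input": 262_144, "output": 131_072},
}

def fits(model: str, prompt: str, max_output_tokens: int) -> bool:
    """Rough capacity check using ~4 characters per token; this is a
    common heuristic, NOT either model's actual tokenizer."""
    estimated_prompt_tokens = len(prompt) // 4
    limits = LIMITS[model]
    return (estimated_prompt_tokens <= limits["input"]
            and max_output_tokens <= limits["output"])

long_doc = "x" * 1_000_000  # ~250k estimated tokens
print(fits("GLM-4.7", long_doc, 4096))                         # False
print(fits("Qwen3-235B-A22B-Thinking-2507", long_doc, 4096))   # True
```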
Input Capabilities
Supported data types and modalities
GLM-4.7 supports multimodal inputs, accepting text alongside other data types such as images, whereas Qwen3-235B-A22B-Thinking-2507 accepts text only. For multimodal applications, GLM-4.7 is the only option of the two.
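Since the providers listed below expose OpenAI-compatible APIs, a mixed text-and-image request to GLM-4.7 would look roughly like the following sketch. The base URL, API key, and model ID are placeholders, not confirmed identifiers:

```python
from openai import OpenAI

# Placeholder endpoint, key, and model ID: substitute the values
# from your provider's documentation.
client = OpenAI(base_url="https://your-provider.example/v1",
                api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="glm-4.7",  # hypothetical model ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the chart in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The same request against Qwen3-235B-A22B-Thinking-2507 would be rejected, since it accepts text-only input.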
License
Usage and distribution terms
Both models ship open weights: GLM-4.7 under the MIT license, Qwen3-235B-A22B-Thinking-2507 under Apache 2.0. Both licenses permit commercial use; the main practical difference is that Apache 2.0 includes an explicit patent grant and notice requirements, while MIT does not.
Release Timeline
When each model was launched
GLM-4.7 was released on 2025-12-22, while Qwen3-235B-A22B-Thinking-2507 was released on 2025-07-25.
GLM-4.7 is 5 months newer than Qwen3-235B-A22B-Thinking-2507.
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Provider Availability
Both models are available from Fireworks and Novita.
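Because both providers follow the OpenAI-compatible convention, switching between them is mostly a matter of base URL and model ID. The URLs and IDs below are assumptions to verify against each provider's documentation, not confirmed values:

```python
from openai import OpenAI

# Both providers expose OpenAI-compatible endpoints, so only the
# base URL and model ID change. URLs and model IDs here are
# assumptions; verify them in each provider's documentation.
PROVIDERS = {
    "fireworks": "https://api.fireworks.ai/inference/v1",
    "novita": "https://api.novita.ai/v3/openai",
}

def make_client(provider: str, api_key: str) -> OpenAI:
    return OpenAI(base_url=PROVIDERS[provider], api_key=api_key)

client = make_client("fireworks", api_key="YOUR_API_KEY")
resp = client.chat.completions.create(
    model="glm-4.7",  # placeholder; model IDs are provider-specific
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```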
Key Takeaways
GLM-4.7 (Zhipu AI): leads on most benchmarks, supports multimodal input, ships open weights under MIT, and is the newer release.
Qwen3-235B-A22B-Thinking-2507 (Alibaba Cloud / Qwen Team): offers a larger input context window, is marginally cheaper on a blended basis, and ships open weights under Apache 2.0.
Detailed Comparison
| Feature | GLM-4.7 | Qwen3-235B-A22B-Thinking-2507 |
|---|---|---|
| Developer | Zhipu AI | Alibaba Cloud / Qwen Team |
| Parameters | ~358B | 235B |
| Input context | 202,800 tokens | 262,144 tokens |
| Max output tokens | 131,072 | 131,072 |
| Input price | $0.60 / 1M tokens | $0.30 / 1M tokens |
| Output price | $2.20 / 1M tokens | $3.00 / 1M tokens |
| Multimodal input | Yes (text + images) | No (text only) |
| License | MIT (open weights) | Apache 2.0 (open weights) |
| Release date | Dec 22, 2025 | Jul 25, 2025 |
| Providers | Fireworks, Novita | Fireworks, Novita |
FAQ
Common questions about GLM-4.7 vs Qwen3-235B-A22B-Thinking-2507.