GLM-5 vs Qwen3.5-27B Comparison
Comparing GLM-5 and Qwen3.5-27B across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
GLM-5 leads in 4 benchmarks (BrowseComp, SWE-Bench Verified, τ²-bench, Terminal-Bench 2.0), while Qwen3.5-27B leads in none.
GLM-5 significantly outperforms Qwen3.5-27B across most benchmarks.
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
Pricing data is unavailable for both models.
Model Size
Parameter count comparison
GLM-5 has 717.0B more parameters than Qwen3.5-27B, making it 2655.6% larger.
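The percentage follows directly from the parameter counts. A quick sanity check in Python, assuming GLM-5 totals 744B parameters (27B plus the 717B difference stated above):

```python
# Parameter counts: Qwen3.5-27B is 27B; GLM-5 is assumed to total 744B
# (27B plus the 717B difference cited above).
glm5_params = 744e9
qwen_params = 27e9

difference = glm5_params - qwen_params       # 717B more parameters
pct_larger = difference / qwen_params * 100  # ~2655.6% larger
ratio = glm5_params / qwen_params            # ~27.6x the size

print(f"{difference / 1e9:.0f}B more, {pct_larger:.1f}% larger, {ratio:.1f}x the size")
```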
Context Window
Maximum input and output token capacity
Only GLM-5 publishes context limits: 200,000 input tokens and 128,000 output tokens. Qwen3.5-27B does not specify either figure.
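As a rough illustration of what these limits mean in practice, here is a minimal pre-flight check against GLM-5's published limits; the 4-characters-per-token heuristic is an approximation, not the model's actual tokenizer.

```python
# GLM-5's published limits (Qwen3.5-27B does not specify comparable figures).
MAX_INPUT_TOKENS = 200_000
MAX_OUTPUT_TOKENS = 128_000

def fits_context(prompt: str, requested_output_tokens: int) -> bool:
    """Rough check that a request stays within GLM-5's context limits.

    Uses a ~4 characters-per-token estimate; a real deployment should
    count tokens with the model's own tokenizer.
    """
    estimated_input_tokens = len(prompt) // 4
    return (estimated_input_tokens <= MAX_INPUT_TOKENS
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context("Summarize the attached report in three bullet points.",
                   requested_output_tokens=2_000))  # True
```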
Input Capabilities
Supported data types and modalities
Qwen3.5-27B supports multimodal inputs, whereas GLM-5 is text-only.
Qwen3.5-27B can handle text alongside other data types such as images, making it suitable for multimodal applications (see the request sketch below).
GLM-5: text
Qwen3.5-27B: text, images
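For illustration, a minimal sketch of a multimodal request, assuming Qwen3.5-27B is served behind an OpenAI-compatible endpoint; the base URL and model identifier are placeholders, not official values from either provider.

```python
from openai import OpenAI

# Placeholder endpoint and model id; substitute your provider's actual values.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen3.5-27b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```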
License
Usage and distribution terms
GLM-5 is licensed under MIT, while Qwen3.5-27B uses Apache 2.0.
Both are permissive open-weight licenses, though Apache 2.0 adds an explicit patent grant; these differences may affect how you use the models in commercial or open-source projects.
GLM-5: MIT (open weights)
Qwen3.5-27B: Apache 2.0 (open weights)
Release Timeline
When each model was launched
GLM-5 was released on 2026-02-11, while Qwen3.5-27B was released on 2026-02-24.
Qwen3.5-27B is 13 days newer than GLM-5.
GLM-5: Feb 11, 2026
Qwen3.5-27B: Feb 24, 2026
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Outputs Comparison
Key Takeaways
GLM-5 (Zhipu AI)
Qwen3.5-27B (Alibaba Cloud / Qwen Team)
Detailed Comparison
| Feature | GLM-5 | Qwen3.5-27B |
|---|---|---|
| Developer | Zhipu AI | Alibaba Cloud / Qwen Team |
| Parameters | ~744B | 27B |
| Input context | 200,000 tokens | Not specified |
| Output context | 128,000 tokens | Not specified |
| Input modalities | Text | Text, images |
| License | MIT (open weights) | Apache 2.0 (open weights) |
| Release date | Feb 11, 2026 | Feb 24, 2026 |
| Knowledge cutoff | Not specified | Not specified |
| Pricing | Not available | Not available |