Model Comparison

Kimi K2.5 vs GLM-4.7-Flash

Kimi K2.5 significantly outperforms GLM-4.7-Flash across most benchmarks, while GLM-4.7-Flash is roughly 7.9x cheaper per token (blended, assuming a 3:1 input-to-output ratio).

Performance Benchmarks

Comparative analysis across standard metrics


Kimi K2.5 outperforms in 5 benchmarks (AIME 2025, BrowseComp, GPQA, Humanity's Last Exam, SWE-Bench Verified), while GLM-4.7-Flash is better at 0 benchmarks.


Fri May 01 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

GLM-4.7-Flash costs less

For input processing, Kimi K2.5 ($0.60/1M tokens) is 8.6x more expensive than GLM-4.7-Flash ($0.07/1M tokens).

For output processing, Kimi K2.5 ($3.00/1M tokens) is 7.5x more expensive than GLM-4.7-Flash ($0.40/1M tokens).

Overall, Kimi K2.5 is roughly 7.9x more expensive than GLM-4.7-Flash.*

* Using a 3:1 ratio of input to output tokens
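The blended figure follows directly from the per-token prices above. A minimal sketch, assuming the 3:1 input-to-output weighting stated in the footnote (the helper name is illustrative, not an llm-stats.com API):

```python
# Blended price per 1M tokens under a 3:1 input-to-output token ratio.
# Prices are taken from the comparison above; the weighting scheme is
# an assumption based on the page's footnote.

def blended_price(input_per_m: float, output_per_m: float, ratio=(3, 1)) -> float:
    """Weighted average price per 1M tokens for a given input:output ratio."""
    in_w, out_w = ratio
    return (input_per_m * in_w + output_per_m * out_w) / (in_w + out_w)

kimi = blended_price(0.60, 3.00)  # $1.20 per 1M tokens
glm = blended_price(0.07, 0.40)   # $0.1525 per 1M tokens
print(f"Kimi K2.5 blended: ${kimi:.4f}/1M")
print(f"GLM-4.7-Flash blended: ${glm:.4f}/1M")
print(f"Ratio: {kimi / glm:.1f}x")  # ~7.9x
```

This reproduces the 7.9x headline figure: the per-token gap (8.6x input, 7.5x output) narrows slightly once the two token types are averaged at 3:1.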

Lowest available price from all providers
Kimi K2.5 (Moonshot AI)
Input tokens: $0.60
Output tokens: $3.00
Best provider: Fireworks

GLM-4.7-Flash (Zhipu AI)
Input tokens: $0.07
Output tokens: $0.40
Best provider: Unknown Organization

Model Size

Parameter count comparison

970.0B diff

Kimi K2.5 has 970.0B more parameters than GLM-4.7-Flash, making it 3233.3% larger.
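The difference and percentage figures above can be derived from the two parameter counts. A quick check, using the counts reported on this page:

```python
# How the size comparison is computed (parameter counts from the page).
kimi_b, glm_b = 1000.0, 30.0  # billions of parameters

diff = kimi_b - glm_b                        # 970.0B more parameters
pct_larger = (kimi_b - glm_b) / glm_b * 100  # 3233.3% larger
print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")
```

Note the percentage is relative to the smaller model: 970B extra on a 30B base is a 3233.3% increase, i.e. Kimi K2.5 is about 33x the size of GLM-4.7-Flash.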

Kimi K2.5 (Moonshot AI): 1000.0B parameters
GLM-4.7-Flash (Zhipu AI): 30.0B parameters

Context Window

Maximum input and output token capacity

Kimi K2.5 accepts up to 262,100 input tokens, versus 128,000 for GLM-4.7-Flash. It can also generate responses of up to 262,100 tokens, while GLM-4.7-Flash is limited to 16,384 output tokens.

Kimi K2.5 (Moonshot AI)
Input: 262,100 tokens
Output: 262,100 tokens

GLM-4.7-Flash (Zhipu AI)
Input: 128,000 tokens
Output: 16,384 tokens

Input Capabilities

Supported data types and modalities

Kimi K2.5 supports multimodal inputs, whereas GLM-4.7-Flash does not.

Kimi K2.5 can handle both text and other forms of data like images, making it suitable for multimodal applications.

Kimi K2.5
Text: supported
Images: supported
Audio: not specified
Video: not specified

GLM-4.7-Flash
Text: supported
Images: not supported
Audio: not supported
Video: not supported

License

Usage and distribution terms

Both models are licensed under MIT.

Both models share the same licensing terms, providing consistent usage rights.

Kimi K2.5

MIT

Open weights

GLM-4.7-Flash

MIT

Open weights

Release Timeline

When each model was launched

Kimi K2.5 was released on 2026-01-27, while GLM-4.7-Flash was released on 2026-01-19.

Kimi K2.5 is about one week newer than GLM-4.7-Flash.
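The gap between the two release dates (January 27 vs January 19, 2026) can be checked directly:

```python
from datetime import date

kimi_release = date(2026, 1, 27)
glm_release = date(2026, 1, 19)
gap = (kimi_release - glm_release).days
print(f"Kimi K2.5 is {gap} days newer")  # 8 days, i.e. about one week
```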

Kimi K2.5: Jan 27, 2026 (3 months ago, 1 week newer)
GLM-4.7-Flash: Jan 19, 2026 (3 months ago)

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

Kimi K2.5 is available from Fireworks and Moonshot AI. GLM-4.7-Flash is available from ZAI.

Kimi K2.5
Fireworks: $0.60/1M input, $3.00/1M output
Moonshot AI: $0.60/1M input, $3.00/1M output

GLM-4.7-Flash
ZAI: $0.07/1M input, $0.40/1M output
* Prices shown are per million tokens


Key Takeaways

Kimi K2.5:
Larger context window (262,100 vs 128,000 tokens)
Supports multimodal inputs
Higher AIME 2025 score (96.1% vs 91.6%)
Higher BrowseComp score (74.9% vs 42.8%)
Higher GPQA score (87.6% vs 75.2%)
Higher Humanity's Last Exam score (50.2% vs 14.4%)
Higher SWE-Bench Verified score (76.8% vs 59.2%)

GLM-4.7-Flash:
Less expensive input tokens ($0.07 vs $0.60 per 1M)
Less expensive output tokens ($0.40 vs $3.00 per 1M)

Detailed Comparison

[Feature-by-feature comparison table: Kimi K2.5 (Moonshot AI) vs GLM-4.7-Flash (Zhipu AI)]

FAQ

Common questions about Kimi K2.5 vs GLM-4.7-Flash

Which model is better overall?
Kimi K2.5 significantly outperforms across most benchmarks. Kimi K2.5 is made by Moonshot AI and GLM-4.7-Flash is made by Zhipu AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Kimi K2.5 scores AIME 2025: 96.1%, HMMT 2025: 95.4%, InfoVQA (test): 92.6%, OCRBench: 92.3%, MathVista-Mini: 90.1%. GLM-4.7-Flash scores AIME 2025: 91.6%, Tau-bench: 79.5%, GPQA: 75.2%, SWE-Bench Verified: 59.2%, BrowseComp: 42.8%.

Which model is cheaper?
GLM-4.7-Flash is 8.6x cheaper for input tokens. Kimi K2.5 costs $0.60/M input and $3.00/M output via Fireworks. GLM-4.7-Flash costs $0.07/M input and $0.40/M output via ZAI.

Which model has the larger context window?
Kimi K2.5 supports 262K tokens and GLM-4.7-Flash supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include context window (262K vs 128K), input pricing ($0.60 vs $0.07/M), and multimodal support (yes vs no). See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
Kimi K2.5 is developed by Moonshot AI and GLM-4.7-Flash is developed by Zhipu AI.