Model Comparison

Kimi K2.5 vs Qwen3.5-27B

Kimi K2.5 significantly outperforms Qwen3.5-27B across most benchmarks, while Qwen3.5-27B is about 1.5x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

23 benchmarks

Kimi K2.5 outperforms in 20 benchmarks (AA-LCR, BrowseComp, GPQA, HMMT 2025, Humanity's Last Exam, LiveCodeBench v6, LongBench v2, LVBench, MathVista-Mini, MMLU-Pro, MMMU-Pro, MMVU, OCRBench, Seal-0, SimpleVQA, SWE-Bench Verified, Terminal-Bench 2.0, VideoMMMU, WideSearch, ZEROBench), while Qwen3.5-27B is better at 3 benchmarks (CharXiv-R, MathVision, OmniDocBench 1.5).
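The win tally above can be reproduced directly from per-benchmark scores. A minimal sketch, using a subset of the head-to-head scores listed in the Key Takeaways section (Kimi K2.5 first in each pair):

```python
# Tally head-to-head benchmark wins from per-benchmark scores (%).
# Values are copied from this comparison's Key Takeaways section.
scores = {
    # benchmark: (Kimi K2.5, Qwen3.5-27B)
    "GPQA": (87.6, 85.5),
    "SWE-Bench Verified": (76.8, 72.4),
    "CharXiv-R": (77.5, 79.5),
    "MathVision": (84.2, 86.0),
}

kimi_wins = sum(1 for k, q in scores.values() if k > q)
qwen_wins = sum(1 for k, q in scores.values() if q > k)
print(kimi_wins, qwen_wins)  # 2 2 on this subset
```

Extending the dict to all 23 benchmarks reproduces the 20-to-3 split reported above.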


Sat Apr 18 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Qwen3.5-27B costs less

For input processing, Kimi K2.5 ($0.60/1M tokens) is 2.0x more expensive than Qwen3.5-27B ($0.30/1M tokens).

For output processing, Kimi K2.5 ($3.00/1M tokens) is 1.3x more expensive than Qwen3.5-27B ($2.40/1M tokens).

In conclusion, Kimi K2.5 is more expensive than Qwen3.5-27B.*

* Using a 3:1 ratio of input to output tokens
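The blended comparison can be checked with a short calculation, using the lowest-provider prices quoted above and the stated 3:1 input:output ratio:

```python
# Blended cost per 1M tokens at a 3:1 input:output ratio,
# using the lowest available per-provider prices (USD per 1M tokens).
def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

kimi = blended_price(0.60, 3.00)   # (3 * 0.60 + 3.00) / 4 = 1.20
qwen = blended_price(0.30, 2.40)   # (3 * 0.30 + 2.40) / 4 = 0.825
print(f"{kimi:.3f} {qwen:.3f} {kimi / qwen:.2f}x")  # 1.200 0.825 1.45x
```

The ~1.45x blended ratio is what rounds to the "1.5x cheaper per token" figure quoted at the top of this page.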

Lowest available price from all providers
Moonshot AI
Kimi K2.5
Input tokens: $0.60
Output tokens: $3.00
Best provider: Fireworks
Alibaba Cloud / Qwen Team
Qwen3.5-27B
Input tokens: $0.30
Output tokens: $2.40
Best provider: Novita

Model Size

Parameter count comparison

973.0B diff

Kimi K2.5 has 973.0B more parameters than Qwen3.5-27B, making it 3603.7% larger.

Moonshot AI
Kimi K2.5
1000.0B parameters
Alibaba Cloud / Qwen Team
Qwen3.5-27B
27.0B parameters
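The headline figures in this section follow from simple arithmetic on the two parameter counts:

```python
# Relative size: how much larger Kimi K2.5 (1000.0B params) is
# than Qwen3.5-27B (27.0B params), per the counts above.
kimi_b, qwen_b = 1000.0, 27.0
diff = kimi_b - qwen_b                    # 973.0B more parameters
pct_larger = (kimi_b / qwen_b - 1) * 100  # about 3603.7% larger
print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")
```

Note that "3603.7% larger" means roughly 37x the size, not 36x: the percentage is relative growth over the smaller model.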

Context Window

Maximum input and output token capacity

Qwen3.5-27B accepts 262,144 input tokens compared to Kimi K2.5's 262,100 tokens. Kimi K2.5 can generate longer responses up to 262,100 tokens, while Qwen3.5-27B is limited to 65,536 tokens.
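A practical consequence of these limits is worth checking per request. A minimal sketch, using the token limits above; the actual prompt token count would come from each model's own tokenizer, which is assumed here:

```python
# Check whether a request fits each model's context limits.
# Limits are taken from this comparison; prompt_tokens and
# max_new_tokens are assumed to come from your own counting.
LIMITS = {
    "Kimi K2.5":   {"input": 262_100, "output": 262_100},
    "Qwen3.5-27B": {"input": 262_144, "output": 65_536},
}

def fits(model: str, prompt_tokens: int, max_new_tokens: int) -> bool:
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_new_tokens <= lim["output"]

print(fits("Qwen3.5-27B", 200_000, 100_000))  # False: output cap is 65,536
print(fits("Kimi K2.5", 200_000, 100_000))    # True
```

The input limits are nearly identical, so in practice the output cap is the differentiator for long-generation workloads.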

Moonshot AI
Kimi K2.5
Input: 262,100 tokens
Output: 262,100 tokens
Alibaba Cloud / Qwen Team
Qwen3.5-27B
Input: 262,144 tokens
Output: 65,536 tokens

Input Capabilities

Supported data types and modalities

Both Kimi K2.5 and Qwen3.5-27B support multimodal inputs.

Both models accept text, image, audio, and video data, giving them versatility across applications.

Kimi K2.5

Text
Images
Audio
Video

Qwen3.5-27B

Text
Images
Audio
Video

License

Usage and distribution terms

Kimi K2.5 is licensed under MIT, while Qwen3.5-27B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Kimi K2.5

MIT

Open weights

Qwen3.5-27B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Kimi K2.5 was released on 2026-01-27, while Qwen3.5-27B was released on 2026-02-24.

Qwen3.5-27B is 1 month newer than Kimi K2.5.

Kimi K2.5

Jan 27, 2026

2 months ago

Qwen3.5-27B

Feb 24, 2026

1 month ago

4 weeks newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Provider Availability

Kimi K2.5 is available from Fireworks and Moonshot AI. Qwen3.5-27B is available from Novita.

Kimi K2.5

Fireworks
Input: $0.60/1M, Output: $3.00/1M
Moonshot AI
Input: $0.60/1M, Output: $3.00/1M

Qwen3.5-27B

Novita
Input: $0.30/1M, Output: $2.40/1M
* Prices shown are per million tokens


Key Takeaways

Moonshot AI

Kimi K2.5

Higher AA-LCR score (70.0% vs 66.1%)
Higher BrowseComp score (74.9% vs 61.0%)
Higher GPQA score (87.6% vs 85.5%)
Higher HMMT 2025 score (95.4% vs 92.0%)
Higher Humanity's Last Exam score (50.2% vs 48.5%)
Higher LiveCodeBench v6 score (85.0% vs 80.7%)
Higher LongBench v2 score (61.0% vs 60.6%)
Higher LVBench score (75.9% vs 73.6%)
Higher MathVista-Mini score (90.1% vs 87.8%)
Higher MMLU-Pro score (87.1% vs 86.1%)
Higher MMMU-Pro score (78.5% vs 75.0%)
Higher MMVU score (80.4% vs 73.3%)
Higher OCRBench score (92.3% vs 89.4%)
Higher Seal-0 score (57.4% vs 47.2%)
Higher SimpleVQA score (71.2% vs 56.0%)
Higher SWE-Bench Verified score (76.8% vs 72.4%)
Higher Terminal-Bench 2.0 score (50.8% vs 41.6%)
Higher VideoMMMU score (86.6% vs 82.3%)
Higher WideSearch score (79.0% vs 61.1%)
Higher ZEROBench score (11.0% vs 10.0%)
Alibaba Cloud / Qwen Team

Qwen3.5-27B

Larger context window (262,144 tokens)
Less expensive input tokens
Less expensive output tokens
Higher CharXiv-R score (79.5% vs 77.5%)
Higher MathVision score (86.0% vs 84.2%)
Higher OmniDocBench 1.5 score (88.9% vs 88.8%)

Detailed Comparison

Feature-by-feature comparison table: Kimi K2.5 (Moonshot AI) vs Qwen3.5-27B (Alibaba Cloud / Qwen Team)

FAQ

Common questions about Kimi K2.5 vs Qwen3.5-27B

Which model is better, Kimi K2.5 or Qwen3.5-27B?
Kimi K2.5 significantly outperforms across most benchmarks. Kimi K2.5 is made by Moonshot AI and Qwen3.5-27B is made by Alibaba Cloud / Qwen Team. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

What are each model's strongest benchmarks?
Kimi K2.5 scores AIME 2025: 96.1%, HMMT 2025: 95.4%, InfoVQA (test): 92.6%, OCRBench: 92.3%, and MathVista-Mini: 90.1%. Qwen3.5-27B scores CountBench: 97.8%, VLMsAreBlind: 96.9%, IFEval: 95.0%, V*: 93.7%, and MMLU-Redux: 93.2%.

How do the two models compare on price?
Qwen3.5-27B is 2.0x cheaper for input tokens. Kimi K2.5 costs $0.60/M input and $3.00/M output via Fireworks; Qwen3.5-27B costs $0.30/M input and $2.40/M output via Novita.

Which model has the larger context window?
Both accept roughly 262K input tokens (262,100 for Kimi K2.5, 262,144 for Qwen3.5-27B), but Kimi K2.5 can generate up to 262,100 output tokens versus Qwen3.5-27B's 65,536. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the models?
Key differences include output limits (262,100 vs 65,536 tokens), input pricing ($0.60 vs $0.30/M), and licensing (MIT vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
Kimi K2.5 is developed by Moonshot AI and Qwen3.5-27B is developed by Alibaba Cloud / Qwen Team.