Model Comparison

Kimi K2.5 vs Qwen3.5-35B-A3B

Kimi K2.5 significantly outperforms across most benchmarks, while Qwen3.5-35B-A3B is roughly 1.7x cheaper per token on a blended 3:1 input:output basis.

Performance Benchmarks

Comparative analysis across standard metrics

23 benchmarks

Across the shared benchmarks, Kimi K2.5 leads on 21 (AA-LCR, BrowseComp, GPQA, HMMT 2025, Humanity's Last Exam, LiveCodeBench v6, LongBench v2, LVBench, MathVision, MathVista-Mini, MMLU-Pro, MMMU-Pro, MMVU, OCRBench, Seal-0, SimpleVQA, SWE-Bench Verified, Terminal-Bench 2.0, VideoMMMU, WideSearch, ZEROBench), while Qwen3.5-35B-A3B leads on one (OmniDocBench 1.5).

Kimi K2.5 significantly outperforms across most benchmarks.

Fri May 01 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

Qwen3.5-35B-A3B costs less

For input processing, Kimi K2.5 ($0.60/1M tokens) is 2.4x more expensive than Qwen3.5-35B-A3B ($0.25/1M tokens).

For output processing, Kimi K2.5 ($3.00/1M tokens) is 1.5x more expensive than Qwen3.5-35B-A3B ($2.00/1M tokens).

In conclusion, Kimi K2.5 is more expensive than Qwen3.5-35B-A3B.*

* Using a 3:1 ratio of input to output tokens
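The blended comparison above can be reproduced with a short sketch. The `blended_price` helper below is illustrative, not part of any published API; it simply weights the per-token prices by the stated 3:1 usage ratio:

```python
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Blended $/1M tokens, weighting input vs. output by a usage ratio."""
    weight = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / weight

kimi = blended_price(0.60, 3.00)   # (3 * 0.60 + 3.00) / 4 = $1.20/1M
qwen = blended_price(0.25, 2.00)   # (3 * 0.25 + 2.00) / 4 = $0.6875/1M
print(f"Kimi K2.5 is {kimi / qwen:.1f}x more expensive blended")  # 1.7x
```

This is where the headline "1.7x cheaper per token" figure comes from: the gap is 2.4x on input but only 1.5x on output, and the 3:1 blend lands between the two.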

Lowest available price from all providers
Moonshot AI
Kimi K2.5
Input tokens: $0.60
Output tokens: $3.00
Best provider: Fireworks

Alibaba Cloud / Qwen Team
Qwen3.5-35B-A3B
Input tokens: $0.25
Output tokens: $2.00
Best provider: Novita

Model Size

Parameter count comparison

965.0B diff

Kimi K2.5 has 965.0B more parameters than Qwen3.5-35B-A3B, making it 2757.1% larger.
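As a sanity check, the size gap works out as plain arithmetic, assuming the 1000.0B and 35.0B totals reported below:

```python
kimi_params, qwen_params = 1000.0, 35.0   # billions of parameters
diff = kimi_params - qwen_params          # 965.0B absolute difference
pct_larger = diff / qwen_params * 100     # relative to Qwen's size
print(f"{diff}B diff, {pct_larger:.1f}% larger")  # 965.0B diff, 2757.1% larger
```

Note that the percentage is measured relative to the smaller model, which is why it reads as "2757.1% larger" rather than the equivalent "28.6x the size".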

Moonshot AI
Kimi K2.5: 1000.0B parameters

Alibaba Cloud / Qwen Team
Qwen3.5-35B-A3B: 35.0B parameters

Context Window

Maximum input and output token capacity

The two models accept nearly identical input lengths: 262,144 tokens for Qwen3.5-35B-A3B versus 262,100 for Kimi K2.5. On output, however, Kimi K2.5 can generate up to 262,100 tokens, while Qwen3.5-35B-A3B is limited to 65,000.

Moonshot AI
Kimi K2.5
Input: 262,100 tokens
Output: 262,100 tokens

Alibaba Cloud / Qwen Team
Qwen3.5-35B-A3B
Input: 262,144 tokens
Output: 65,000 tokens

Input Capabilities

Supported data types and modalities

Both Kimi K2.5 and Qwen3.5-35B-A3B support the same multimodal inputs: text, images, audio, and video.

Kimi K2.5

Text
Images
Audio
Video

Qwen3.5-35B-A3B

Text
Images
Audio
Video

License

Usage and distribution terms

Kimi K2.5 is licensed under MIT, while Qwen3.5-35B-A3B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Kimi K2.5

MIT

Open weights

Qwen3.5-35B-A3B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Kimi K2.5 was released on 2026-01-27, while Qwen3.5-35B-A3B was released on 2026-02-24.

Qwen3.5-35B-A3B is 1 month newer than Kimi K2.5.

Kimi K2.5

Jan 27, 2026

3 months ago

Qwen3.5-35B-A3B

Feb 24, 2026

2 months ago

4w newer
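The "4w newer" badge checks out with the standard library's date arithmetic, using the release dates above:

```python
from datetime import date

kimi_release = date(2026, 1, 27)
qwen_release = date(2026, 2, 24)
gap = (qwen_release - kimi_release).days
print(f"{gap} days, {gap // 7} weeks")  # 28 days, exactly 4 weeks
```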

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

Kimi K2.5 is available from Fireworks, Moonshot AI. Qwen3.5-35B-A3B is available from Novita.

Kimi K2.5

Fireworks: $0.60/1M input, $3.00/1M output
Moonshot AI: $0.60/1M input, $3.00/1M output

Qwen3.5-35B-A3B

Novita: $0.25/1M input, $2.00/1M output
* Prices shown are per million tokens


Key Takeaways

Kimi K2.5 (Moonshot AI)

Higher AA-LCR score (70.0% vs 58.5%)
Higher BrowseComp score (74.9% vs 61.0%)
Higher GPQA score (87.6% vs 84.2%)
Higher HMMT 2025 score (95.4% vs 89.0%)
Higher Humanity's Last Exam score (50.2% vs 47.4%)
Higher LiveCodeBench v6 score (85.0% vs 74.6%)
Higher LongBench v2 score (61.0% vs 59.0%)
Higher LVBench score (75.9% vs 71.4%)
Higher MathVision score (84.2% vs 83.9%)
Higher MathVista-Mini score (90.1% vs 86.2%)
Higher MMLU-Pro score (87.1% vs 85.3%)
Higher MMMU-Pro score (78.5% vs 75.1%)
Higher MMVU score (80.4% vs 72.3%)
Higher OCRBench score (92.3% vs 91.0%)
Higher Seal-0 score (57.4% vs 41.4%)
Higher SimpleVQA score (71.2% vs 58.3%)
Higher SWE-Bench Verified score (76.8% vs 69.2%)
Higher Terminal-Bench 2.0 score (50.8% vs 40.5%)
Higher VideoMMMU score (86.6% vs 80.4%)
Higher WideSearch score (79.0% vs 57.1%)
Higher ZEROBench score (11.0% vs 8.0%)
Qwen3.5-35B-A3B (Alibaba Cloud / Qwen Team)
Larger context window (262,144 tokens)
Less expensive input tokens
Less expensive output tokens
Higher OmniDocBench 1.5 score (89.3% vs 88.8%)

Detailed Comparison


FAQ

Common questions about Kimi K2.5 vs Qwen3.5-35B-A3B

Which model is better overall?
Kimi K2.5 significantly outperforms across most benchmarks. Kimi K2.5 is made by Moonshot AI and Qwen3.5-35B-A3B by Alibaba Cloud / Qwen Team. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

What are each model's strongest benchmark scores?
Kimi K2.5 scores AIME 2025: 96.1%, HMMT 2025: 95.4%, InfoVQA (test): 92.6%, OCRBench: 92.3%, MathVista-Mini: 90.1%. Qwen3.5-35B-A3B scores CountBench: 97.8%, VLMsAreBlind: 97.0%, MMLU-Redux: 93.3%, V*: 92.7%, AI2D: 92.6%.

Which model is cheaper?
Qwen3.5-35B-A3B is 2.4x cheaper for input tokens. Kimi K2.5 costs $0.60/M input and $3.00/M output via Fireworks; Qwen3.5-35B-A3B costs $0.25/M input and $2.00/M output via Novita.

Which has the larger context window?
Both support roughly 262K input tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include input pricing ($0.60 vs $0.25/M), output limits (262,100 vs 65,000 tokens), and licensing (MIT vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
Kimi K2.5 is developed by Moonshot AI; Qwen3.5-35B-A3B by Alibaba Cloud / Qwen Team.