Model Comparison

DeepSeek R1 Distill Qwen 32B vs Kimi K2-Instruct-0905

Both models are evenly matched across the benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

DeepSeek R1 Distill Qwen 32B outperforms on 2 benchmarks (AIME 2024, LiveCodeBench), while Kimi K2-Instruct-0905 performs better on the other 2 (GPQA, MATH-500), leaving the two models evenly matched overall.

Data as of April 15, 2026 (llm-stats.com)

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Pricing data is only listed for DeepSeek R1 Distill Qwen 32B; no provider pricing is available for Kimi K2-Instruct-0905.

Lowest available price from all providers
DeepSeek
DeepSeek R1 Distill Qwen 32B
Input tokens: $0.12
Output tokens: $0.18
Best provider: Deepinfra

Moonshot AI
Kimi K2-Instruct-0905
Input tokens: not listed
Output tokens: not listed
Best provider: not listed
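
As a rough illustration of how the per-million-token rates above translate into request costs, the sketch below estimates the price of a single call at the Deepinfra rates listed for DeepSeek R1 Distill Qwen 32B ($0.12 input, $0.18 output per million tokens). The function name and the example token counts are hypothetical.

```python
# Rough cost estimate from per-million-token rates (illustrative only).
# Rates taken from the Deepinfra listing above; token counts are made up.

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.12, output_rate: float = 0.18) -> float:
    """Return the USD cost of one request, given per-million-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion.
print(f"${request_cost(4_000, 1_000):.6f}")  # ≈ $0.000660
```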

Model Size

Parameter count comparison

967.2B diff

Kimi K2-Instruct-0905 has 967.2B more parameters than DeepSeek R1 Distill Qwen 32B, making it 2948.8% larger.

DeepSeek
DeepSeek R1 Distill Qwen 32B
32.8B parameters

Moonshot AI
Kimi K2-Instruct-0905
1000.0B parameters
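
The absolute and relative differences quoted above follow directly from the two parameter counts; a minimal check:

```python
# Parameter-count difference, using the figures listed above.
deepseek_b = 32.8    # DeepSeek R1 Distill Qwen 32B, billions of parameters
kimi_b = 1000.0      # Kimi K2-Instruct-0905, billions of parameters

diff_b = kimi_b - deepseek_b              # 967.2B absolute difference
pct_larger = diff_b / deepseek_b * 100    # ≈ 2948.8% larger (roughly 30x the size)
print(f"{diff_b:.1f}B difference, {pct_larger:.1f}% larger")
```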

Context Window

Maximum input and output token capacity

DeepSeek R1 Distill Qwen 32B specifies a context window of 128,000 input tokens and 128,000 output tokens; Kimi K2-Instruct-0905 does not specify either limit.

DeepSeek
DeepSeek R1 Distill Qwen 32B
Input: 128,000 tokens
Output: 128,000 tokens

Moonshot AI
Kimi K2-Instruct-0905
Input: not specified
Output: not specified
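
Because only DeepSeek R1 Distill Qwen 32B publishes a context limit here, a caller has to budget prompt plus completion inside 128,000 tokens. The sketch below is a minimal pre-flight check; the 4-characters-per-token estimate is a crude heuristic, not the model's actual tokenizer.

```python
# Minimal pre-flight check against a 128,000-token context window.
# The chars/4 estimate is a rough heuristic, not the model's real tokenizer.

CONTEXT_WINDOW = 128_000

def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    est_prompt_tokens = len(prompt) // 4   # crude token estimate
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Summarize the quarterly report. " * 100, max_output_tokens=4_096))
```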

License

Usage and distribution terms

Both models are licensed under MIT.

Both models share the same licensing terms, providing consistent usage rights.

DeepSeek R1 Distill Qwen 32B

MIT

Open weights

Kimi K2-Instruct-0905

MIT

Open weights

Release Timeline

When each model was launched

DeepSeek R1 Distill Qwen 32B was released on 2025-01-20, while Kimi K2-Instruct-0905 was released on 2025-09-05.

Kimi K2-Instruct-0905 is about 7.5 months newer than DeepSeek R1 Distill Qwen 32B.

DeepSeek R1 Distill Qwen 32B

Jan 20, 2025

1.2 years ago

Kimi K2-Instruct-0905

Sep 5, 2025

7 months ago

7mo newer
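
The release gap can be verified directly from the two dates; it comes out just under seven and a half months, which is why it rounds to "7mo" on the card above.

```python
# Gap between the two release dates listed above.
from datetime import date

deepseek_release = date(2025, 1, 20)   # DeepSeek R1 Distill Qwen 32B
kimi_release = date(2025, 9, 5)        # Kimi K2-Instruct-0905

gap_days = (kimi_release - deepseek_release).days
print(gap_days, "days,", f"about {gap_days / 30.44:.1f} months")  # 228 days, about 7.5 months
```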

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Outputs Comparison


Key Takeaways

DeepSeek R1 Distill Qwen 32B: larger specified context window (128,000 tokens)
DeepSeek R1 Distill Qwen 32B: higher AIME 2024 score (83.3% vs 69.6%)
DeepSeek R1 Distill Qwen 32B: higher LiveCodeBench score (57.2% vs 53.7%)
Kimi K2-Instruct-0905: higher GPQA score (75.1% vs 62.1%)
Kimi K2-Instruct-0905: higher MATH-500 score (97.4% vs 94.3%)
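
The 2-2 split behind the "evenly matched" verdict can be reproduced from the scores above; a small sketch tallying the per-benchmark winners:

```python
# Per-benchmark winners, using the scores quoted in this comparison.
scores = {
    # benchmark: (DeepSeek R1 Distill Qwen 32B, Kimi K2-Instruct-0905)
    "AIME 2024":     (83.3, 69.6),
    "LiveCodeBench": (57.2, 53.7),
    "GPQA":          (62.1, 75.1),
    "MATH-500":      (94.3, 97.4),
}

for bench, (deepseek, kimi) in scores.items():
    winner = "DeepSeek R1 Distill Qwen 32B" if deepseek > kimi else "Kimi K2-Instruct-0905"
    print(f"{bench}: {winner}")
# DeepSeek wins AIME 2024 and LiveCodeBench; Kimi wins GPQA and MATH-500.
```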

Detailed Comparison

FAQ

Common questions about DeepSeek R1 Distill Qwen 32B vs Kimi K2-Instruct-0905

Which model performs better overall?
Both models are evenly matched across the benchmarks. DeepSeek R1 Distill Qwen 32B is made by DeepSeek and Kimi K2-Instruct-0905 is made by Moonshot AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek R1 Distill Qwen 32B scores MATH-500: 94.3%, AIME 2024: 83.3%, GPQA: 62.1%, LiveCodeBench: 57.2%. Kimi K2-Instruct-0905 scores MATH-500: 97.4%, MMLU-Redux: 92.7%, IFEval: 89.8%, AutoLogi: 89.5%, MMLU: 89.5%.

Which model has the larger context window?
DeepSeek R1 Distill Qwen 32B supports 128K tokens, while Kimi K2-Instruct-0905 does not list a context window. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Who develops each model?
DeepSeek R1 Distill Qwen 32B is developed by DeepSeek and Kimi K2-Instruct-0905 is developed by Moonshot AI.