Model Comparison

Kimi K2-Instruct-0905 vs DeepSeek-R1-0528

DeepSeek-R1-0528 significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

Across the 13 benchmarks compared, Kimi K2-Instruct-0905 leads on 3 (SWE-bench Multilingual, SWE-Bench Verified, Terminal-Bench), while DeepSeek-R1-0528 leads on 10 (Aider-Polyglot, AIME 2024, AIME 2025, GPQA, HMMT 2025, Humanity's Last Exam, LiveCodeBench, MMLU-Pro, MMLU-Redux, SimpleQA).


Wed Apr 01 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Pricing data is unavailable for Kimi K2-Instruct-0905. For DeepSeek-R1-0528, the lowest available price across all providers is shown below.

Moonshot AI
Kimi K2-Instruct-0905
Input tokens: —
Output tokens: —
Best provider: —

DeepSeek
DeepSeek-R1-0528
Input tokens: $0.50
Output tokens: $2.15
Best provider: Deepinfra
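Per-million-token prices translate to request cost as (input tokens × input rate + output tokens × output rate) / 1,000,000. A minimal sketch, assuming the Deepinfra rates listed above for DeepSeek-R1-0528; the token counts are illustrative, not from the source:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 0.50,
                 out_price_per_m: float = 2.15) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens * in_price_per_m +
            output_tokens * out_price_per_m) / 1_000_000

# Hypothetical request: 4,000-token prompt, 1,000-token completion.
cost = request_cost(4_000, 1_000)
print(f"${cost:.6f}")  # prints $0.004150
```

At these rates, output tokens cost about 4.3x as much as input tokens, so completion length dominates the bill for generation-heavy workloads.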

Model Size

Parameter count comparison

Kimi K2-Instruct-0905 has 329.0B more parameters than DeepSeek-R1-0528, making it 49.0% larger.

Moonshot AI
Kimi K2-Instruct-0905: 1000.0B parameters

DeepSeek
DeepSeek-R1-0528: 671.0B parameters

Context Window

Maximum input and output token capacity

DeepSeek-R1-0528 specifies both an input and an output context of 131,072 tokens; Kimi K2-Instruct-0905 does not report a context window here.

Moonshot AI
Kimi K2-Instruct-0905
Input: — tokens
Output: — tokens

DeepSeek
DeepSeek-R1-0528
Input: 131,072 tokens
Output: 131,072 tokens

License

Usage and distribution terms

Both models are licensed under MIT, so their usage and distribution terms are identical.

Kimi K2-Instruct-0905: MIT (open weights)

DeepSeek-R1-0528: MIT (open weights)

Release Timeline

When each model was launched

Kimi K2-Instruct-0905 was released on 2025-09-05, while DeepSeek-R1-0528 was released on 2025-05-28.

Kimi K2-Instruct-0905 is 3 months newer than DeepSeek-R1-0528.

Kimi K2-Instruct-0905: Sep 5, 2025 (3 months newer)

DeepSeek-R1-0528: May 28, 2025

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

Kimi K2-Instruct-0905 advantages:

Higher SWE-bench Multilingual score (47.3% vs 30.5%)
Higher SWE-Bench Verified score (65.8% vs 44.6%)
Higher Terminal-Bench score (25.0% vs 5.7%)

DeepSeek-R1-0528 advantages:

Larger context window (131,072 tokens vs unspecified)
Higher Aider-Polyglot score (71.6% vs 60.0%)
Higher AIME 2024 score (91.4% vs 69.6%)
Higher AIME 2025 score (87.5% vs 49.5%)
Higher GPQA score (81.0% vs 75.1%)
Higher HMMT 2025 score (79.4% vs 38.8%)
Higher Humanity's Last Exam score (17.7% vs 4.7%)
Higher LiveCodeBench score (73.3% vs 53.7%)
Higher MMLU-Pro score (85.0% vs 81.1%)
Higher MMLU-Redux score (93.4% vs 92.7%)
Higher SimpleQA score (92.3% vs 31.0%)


FAQ

Common questions about Kimi K2-Instruct-0905 vs DeepSeek-R1-0528

Which model performs better overall?
DeepSeek-R1-0528 significantly outperforms across most benchmarks, but the best choice depends on your use case; compare the benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Kimi K2-Instruct-0905's top scores include MATH-500: 97.4%, MMLU-Redux: 92.7%, IFEval: 89.8%, AutoLogi: 89.5%, and MMLU: 89.5%. DeepSeek-R1-0528's top scores include MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, and MMLU-Pro: 85.0%.

Which model has the larger context window?
Kimi K2-Instruct-0905 does not report a context window, while DeepSeek-R1-0528 supports 131K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Who makes these models?
Kimi K2-Instruct-0905 is developed by Moonshot AI, and DeepSeek-R1-0528 is developed by DeepSeek.