Model Comparison

Kimi K2 0905 vs Kimi K2-Instruct-0905

Kimi K2 0905 scores higher on every compared benchmark, though the margins are narrow.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

Kimi K2 0905 leads in all 4 compared benchmarks (AIME 2024, GPQA, MMLU, MMLU-Pro), while Kimi K2-Instruct-0905 does not lead in any.


Fri May 15 2026 • llm-stats.com

Arena Performance

Human preference votes

Model Size

Parameter count comparison

No size difference

Kimi K2-Instruct-0905 has the same parameter count as Kimi K2 0905: the difference is 0.0B parameters, or 0.0%.

Moonshot AI
Kimi K2 0905
1.0T parameters (1000.0B)

Moonshot AI
Kimi K2-Instruct-0905
1.0T parameters (1000.0B)
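The "0.0% larger" figure above is plain relative-difference arithmetic; a minimal sketch, using the parameter counts quoted in this section:

```python
def size_diff_percent(params_a: float, params_b: float) -> float:
    """Return how much larger model B is than model A, as a percentage of A."""
    return (params_b - params_a) / params_a * 100

# Both models report 1000.0B (1.0T) parameters.
kimi_k2_0905 = 1000.0           # billions of parameters
kimi_k2_instruct_0905 = 1000.0  # billions of parameters

diff = size_diff_percent(kimi_k2_0905, kimi_k2_instruct_0905)
print(f"{diff:.1f}% larger")  # → 0.0% larger
```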

Context Window

Maximum input and output token capacity

Only Kimi K2 0905 specifies its context limits: 262,144 tokens for input and 262,144 tokens for output. Kimi K2-Instruct-0905 does not list context figures.

Moonshot AI
Kimi K2 0905
Input: 262,144 tokens
Output: 262,144 tokens

Moonshot AI
Kimi K2-Instruct-0905
Input: not specified
Output: not specified

License

Usage and distribution terms

Kimi K2 0905 is licensed under a proprietary license, while Kimi K2-Instruct-0905 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Kimi K2 0905

Proprietary

Closed source

Kimi K2-Instruct-0905

MIT

Open weights

Release Timeline

When each model was launched

Both models were released on 2025-09-05.

Since the release dates are identical, they represent the same generation of model development.

Kimi K2 0905

Sep 5, 2025

8 months ago

Kimi K2-Instruct-0905

Sep 5, 2025

8 months ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

No cutoff dates available

Outputs Comparison


Key Takeaways

Kimi K2 0905:
Larger context window (262,144 tokens)
Higher AIME 2024 score (72.0% vs 69.6%)
Higher GPQA score (75.8% vs 75.1%)
Higher MMLU score (90.2% vs 89.5%)
Higher MMLU-Pro score (82.5% vs 81.1%)

Kimi K2-Instruct-0905:
Has open weights (MIT license)

Detailed Comparison

Feature-by-feature table comparing Kimi K2 0905 and Kimi K2-Instruct-0905 (both by Moonshot AI); the underlying figures appear in the sections above.

FAQ

Common questions about Kimi K2 0905 vs Kimi K2-Instruct-0905.

Which is better, Kimi K2 0905 or Kimi K2-Instruct-0905?

Kimi K2 0905 scores higher on every compared benchmark. Both models are made by Moonshot AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How does Kimi K2 0905 compare to Kimi K2-Instruct-0905 in benchmarks?

Kimi K2 0905 scores HumanEval: 94.5%, MMLU: 90.2%, MATH: 89.1%, MMLU-Pro: 82.5%, GPQA: 75.8%. Kimi K2-Instruct-0905 scores MATH-500: 97.4%, MMLU-Redux: 92.7%, IFEval: 89.8%, AutoLogi: 89.5%, MMLU: 89.5%.
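Because the two models report partly different benchmark suites, a fair head-to-head should only use the benchmarks both report (in the lists above, that is MMLU alone). A sketch of that comparison, using the scores quoted in this answer:

```python
k2_0905 = {"HumanEval": 94.5, "MMLU": 90.2, "MATH": 89.1, "MMLU-Pro": 82.5, "GPQA": 75.8}
k2_instruct = {"MATH-500": 97.4, "MMLU-Redux": 92.7, "IFEval": 89.8, "AutoLogi": 89.5, "MMLU": 89.5}

# Compare only on benchmarks that both models report.
shared = sorted(k2_0905.keys() & k2_instruct.keys())
for bench in shared:
    a, b = k2_0905[bench], k2_instruct[bench]
    winner = "Kimi K2 0905" if a > b else "Kimi K2-Instruct-0905"
    print(f"{bench}: {a}% vs {b}% -> {winner}")  # MMLU: 90.2% vs 89.5% -> Kimi K2 0905
```

Scores on disjoint benchmarks (e.g. HumanEval vs MATH-500) are not directly comparable, which is why the loop skips them.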

What are the context window sizes for Kimi K2 0905 and Kimi K2-Instruct-0905?

Kimi K2 0905 supports 262K tokens, while Kimi K2-Instruct-0905 does not publish a context-window figure. A larger context window lets you process longer documents, conversations, or codebases in a single request.
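Whether a given document fits in the 262,144-token window depends on the model's tokenizer; as a rough sketch (assuming ~4 characters per token, a common English-text heuristic that varies by model and language):

```python
CONTEXT_WINDOW = 262_144  # Kimi K2 0905 input context, in tokens
CHARS_PER_TOKEN = 4       # rough heuristic; the real tokenizer will differ

def fits_in_context(text: str, reserve_for_output: int = 0) -> bool:
    """Estimate whether `text` fits in the context window,
    optionally reserving room for generated output tokens."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserve_for_output

doc = "word " * 100_000  # ~500,000 characters -> ~125,000 estimated tokens
print(fits_in_context(doc))  # → True
```

For production use you would count tokens with the provider's actual tokenizer rather than a character heuristic.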

What are the main differences between Kimi K2 0905 and Kimi K2-Instruct-0905?

Key differences include licensing (Proprietary vs MIT). See the full comparison above for benchmark-by-benchmark results.