Model Comparison

DeepSeek-V3.2 (Thinking) vs Kimi K2-Thinking-0905

Kimi K2-Thinking-0905 has a slight edge in benchmark performance, while DeepSeek-V3.2 (Thinking) is roughly 2.7x cheaper per token (blended at a 3:1 input:output ratio).

Performance Benchmarks

Comparative analysis across standard metrics

9 benchmarks

DeepSeek-V3.2 (Thinking) leads in 4 of the 9 benchmarks (BrowseComp-zh, MMLU-Pro, SWE-bench Multilingual, SWE-bench Verified), while Kimi K2-Thinking-0905 leads in the other 5 (AIME 2025, BrowseComp, GPQA, HMMT 2025, Humanity's Last Exam).


Wed May 13 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

DeepSeek-V3.2 (Thinking) costs less

For input processing, DeepSeek-V3.2 (Thinking) ($0.28/1M tokens) is 1.7x cheaper than Kimi K2-Thinking-0905 ($0.47/1M tokens).

For output processing, DeepSeek-V3.2 (Thinking) ($0.42/1M tokens) is 4.8x cheaper than Kimi K2-Thinking-0905 ($2.00/1M tokens).

Overall, Kimi K2-Thinking-0905 works out to roughly 2.7x more expensive than DeepSeek-V3.2 (Thinking).*

* Using a 3:1 ratio of input to output tokens
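The blended figure can be reproduced directly from the per-token prices. A minimal Python sketch (the `blended_price` helper is illustrative; prices are the lowest-provider rates from the tables below):

```python
# Blended cost per 1M tokens at a 3:1 input:output ratio,
# using the lowest available provider prices.
def blended_price(input_per_m, output_per_m, input_parts=3, output_parts=1):
    total = input_parts + output_parts
    return (input_parts * input_per_m + output_parts * output_per_m) / total

deepseek = blended_price(0.28, 0.42)  # (3*0.28 + 0.42) / 4 = 0.315
kimi = blended_price(0.47, 2.00)      # (3*0.47 + 2.00) / 4 = 0.8525

print(f"DeepSeek-V3.2 blended: ${deepseek:.4f}/1M")
print(f"Kimi K2 blended:       ${kimi:.4f}/1M")
print(f"Cost ratio:            {kimi / deepseek:.1f}x")  # ~2.7x
```

Changing the ratio changes the headline multiple: an output-heavy workload pushes the gap toward the 4.8x output-price difference, an input-heavy one toward 1.7x.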

Lowest available price from all providers
DeepSeek
DeepSeek-V3.2 (Thinking)
Input tokens: $0.28
Output tokens: $0.42
Best provider: DeepSeek

Moonshot AI
Kimi K2-Thinking-0905
Input tokens: $0.47
Output tokens: $2.00
Best provider: DeepInfra

Model Size

Parameter count comparison

315.0B parameter difference

Kimi K2-Thinking-0905 has 315.0B more parameters than DeepSeek-V3.2 (Thinking), making it 46.0% larger.

DeepSeek
DeepSeek-V3.2 (Thinking): 685.0B parameters

Moonshot AI
Kimi K2-Thinking-0905: 1.0T (1,000.0B) parameters

Context Window

Maximum input and output token capacity

Kimi K2-Thinking-0905 accepts 262,144 input tokens compared to DeepSeek-V3.2 (Thinking)'s 131,072 tokens. Kimi K2-Thinking-0905 can generate longer responses up to 262,144 tokens, while DeepSeek-V3.2 (Thinking) is limited to 65,536 tokens.

DeepSeek
DeepSeek-V3.2 (Thinking)
Input: 131,072 tokens
Output: 65,536 tokens

Moonshot AI
Kimi K2-Thinking-0905
Input: 262,144 tokens
Output: 262,144 tokens
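Because the two models cap input and output differently, a simple pre-flight check shows whether a planned request fits. This sketch treats the two limits independently, as the table lists them (some APIs instead count input plus output against one shared window); the `fits` helper is hypothetical:

```python
# Context-window limits from the table above: (max input, max output) tokens.
LIMITS = {
    "DeepSeek-V3.2 (Thinking)": (131_072, 65_536),
    "Kimi K2-Thinking-0905": (262_144, 262_144),
}

def fits(model, prompt_tokens, max_output_tokens):
    """True if a request stays within the model's input and output caps."""
    max_in, max_out = LIMITS[model]
    return prompt_tokens <= max_in and max_output_tokens <= max_out

print(fits("DeepSeek-V3.2 (Thinking)", 120_000, 60_000))  # True
print(fits("DeepSeek-V3.2 (Thinking)", 120_000, 70_000))  # False: output cap is 65,536
print(fits("Kimi K2-Thinking-0905", 200_000, 100_000))    # True
```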

License

Usage and distribution terms

Both models are released under the MIT license with open weights, so usage and distribution rights are identical.

DeepSeek-V3.2 (Thinking)

MIT

Open weights

Kimi K2-Thinking-0905

MIT

Open weights

Release Timeline

When each model was launched

DeepSeek-V3.2 (Thinking) was released on 2025-12-01, while Kimi K2-Thinking-0905 was released on 2025-09-05.

DeepSeek-V3.2 (Thinking) is about 3 months newer than Kimi K2-Thinking-0905.

DeepSeek-V3.2 (Thinking)

Dec 1, 2025

5 months ago

3mo newer
Kimi K2-Thinking-0905

Sep 5, 2025

8 months ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

DeepSeek-V3.2 (Thinking) is available from DeepSeek. Kimi K2-Thinking-0905 is available from DeepInfra, Novita, and Fireworks.

DeepSeek-V3.2 (Thinking)

DeepSeek: Input $0.28/1M, Output $0.42/1M

Kimi K2-Thinking-0905

DeepInfra: Input $0.47/1M, Output $2.00/1M
Novita: Input $0.48/1M, Output $2.00/1M
Fireworks: Input $0.60/1M, Output $2.50/1M

* Prices shown are per million tokens


Key Takeaways

DeepSeek-V3.2 (Thinking):
Less expensive input tokens ($0.28 vs $0.47/1M)
Less expensive output tokens ($0.42 vs $2.00/1M)
Higher BrowseComp-zh score (65.0% vs 62.3%)
Higher MMLU-Pro score (85.0% vs 84.6%)
Higher SWE-bench Multilingual score (70.2% vs 61.1%)
Higher SWE-bench Verified score (73.1% vs 71.3%)

Kimi K2-Thinking-0905:
Larger context window (262,144 vs 131,072 tokens)
Higher AIME 2025 score (100.0% vs 93.1%)
Higher BrowseComp score (60.2% vs 51.4%)
Higher GPQA score (84.5% vs 82.4%)
Higher HMMT 2025 score (97.5% vs 90.2%)
Higher Humanity's Last Exam score (51.0% vs 25.1%)

Detailed Comparison

Feature-by-feature table: DeepSeek-V3.2 (Thinking) by DeepSeek vs Kimi K2-Thinking-0905 by Moonshot AI.

FAQ

Common questions about DeepSeek-V3.2 (Thinking) vs Kimi K2-Thinking-0905.

Which is better, DeepSeek-V3.2 (Thinking) or Kimi K2-Thinking-0905?

Kimi K2-Thinking-0905 has a slight edge in benchmark performance. DeepSeek-V3.2 (Thinking) is made by DeepSeek and Kimi K2-Thinking-0905 is made by Moonshot AI. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.

How does DeepSeek-V3.2 (Thinking) compare to Kimi K2-Thinking-0905 in benchmarks?

DeepSeek-V3.2 (Thinking) scores AIME 2025: 93.1%, HMMT 2025: 90.2%, MMLU-Pro: 85.0%, LiveCodeBench: 83.3%, GPQA: 82.4%. Kimi K2-Thinking-0905 scores AIME 2025: 100.0%, HMMT 2025: 97.5%, MMLU-Redux: 94.4%, FRAMES: 87.0%, MMLU-Pro: 84.6%.

Is DeepSeek-V3.2 (Thinking) cheaper than Kimi K2-Thinking-0905?

DeepSeek-V3.2 (Thinking) is 1.7x cheaper for input tokens and 4.8x cheaper for output tokens. It costs $0.28/1M input and $0.42/1M output via DeepSeek, while Kimi K2-Thinking-0905 costs $0.47/1M input and $2.00/1M output via DeepInfra.

What are the context window sizes for DeepSeek-V3.2 (Thinking) and Kimi K2-Thinking-0905?

DeepSeek-V3.2 (Thinking) supports 131K tokens and Kimi K2-Thinking-0905 supports 262K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between DeepSeek-V3.2 (Thinking) and Kimi K2-Thinking-0905?

Key differences include the context window (131K vs 262K tokens) and input pricing ($0.28 vs $0.47/1M). See the full comparison above for benchmark-by-benchmark results.

Who makes DeepSeek-V3.2 (Thinking) and Kimi K2-Thinking-0905?

DeepSeek-V3.2 (Thinking) is developed by DeepSeek and Kimi K2-Thinking-0905 is developed by Moonshot AI.