DeepSeek-V3.2 (Thinking) vs GPT-5 Codex Comparison

Comparing DeepSeek-V3.2 (Thinking) and GPT-5 Codex across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

Benchmarks with data for both models: 1

DeepSeek-V3.2 (Thinking) leads in 0 benchmarks, while GPT-5 Codex leads in 1 (SWE-Bench Verified).

On the available data GPT-5 Codex comes out ahead, though only one shared benchmark is reported.

Sat Mar 14 2026 • llm-stats.com

Arena Performance

Human preference votes (no vote data is shown for this pair)

Pricing Analysis

Price comparison per million tokens

Cost data is unavailable for GPT-5 Codex.

Lowest available price from all providers
DeepSeek: DeepSeek-V3.2 (Thinking)
Input tokens: $0.28
Output tokens: $0.42
Best provider: DeepSeek

OpenAI: GPT-5 Codex
Input tokens: not available
Output tokens: not available
Best provider: not available
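Since prices are quoted per million tokens, the cost of a single request is easy to estimate. A minimal sketch using the DeepSeek-V3.2 (Thinking) rates listed above; the token counts in the example are hypothetical:

```python
# Per-million-token prices for DeepSeek-V3.2 (Thinking), as quoted above.
INPUT_PRICE_PER_M = 0.28   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.42  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the rates above."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0036
```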

Context Window

Maximum input and output token capacity

Only DeepSeek-V3.2 (Thinking) has published context figures: 131,072 input tokens and 65,536 output tokens. GPT-5 Codex's limits are not specified.

DeepSeek: DeepSeek-V3.2 (Thinking)
Input: 131,072 tokens
Output: 65,536 tokens

OpenAI: GPT-5 Codex
Input: not specified
Output: not specified
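A client can check a request against these limits before sending it. A minimal sketch assuming the 131,072 / 65,536 figures above; the helper name is illustrative, not part of any API:

```python
# Context limits for DeepSeek-V3.2 (Thinking), as listed above.
MAX_INPUT_TOKENS = 131_072
MAX_OUTPUT_TOKENS = 65_536

def fits_context(prompt_tokens: int, requested_output: int) -> bool:
    """Check a request against the model's input/output token limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and requested_output <= MAX_OUTPUT_TOKENS)

print(fits_context(100_000, 4_096))  # → True
print(fits_context(200_000, 4_096))  # → False
```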

License

Usage and distribution terms

DeepSeek-V3.2 (Thinking) is licensed under MIT, while GPT-5 Codex uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-V3.2 (Thinking)

MIT

Open weights

GPT-5 Codex

Proprietary

Closed source

Release Timeline

When each model was launched

DeepSeek-V3.2 (Thinking) was released on 2025-12-01, while GPT-5 Codex was released on 2025-09-15.

DeepSeek-V3.2 (Thinking) is roughly 2.5 months newer than GPT-5 Codex.

DeepSeek-V3.2 (Thinking)
Dec 1, 2025 (about 3 months ago)

GPT-5 Codex
Sep 15, 2025 (about 6 months ago)
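The gap between the two release dates works out to about 2.5 months, which explains why rounded figures for it vary. A quick check:

```python
from datetime import date

deepseek = date(2025, 12, 1)  # DeepSeek-V3.2 (Thinking) release
codex = date(2025, 9, 15)     # GPT-5 Codex release

gap_days = (deepseek - codex).days
print(gap_days)                    # → 77
print(round(gap_days / 30.44, 1))  # → 2.5 (average month length)
```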

Knowledge Cutoff

When training data ends

GPT-5 Codex has a documented knowledge cutoff of 2024-09-30, while DeepSeek-V3.2 (Thinking)'s cutoff date is not specified.

We can confirm GPT-5 Codex's training data extends to 2024-09-30, but cannot make a direct comparison without DeepSeek-V3.2 (Thinking)'s cutoff date.

DeepSeek-V3.2 (Thinking): not specified

GPT-5 Codex: Sep 2024

Outputs Comparison


Key Takeaways

DeepSeek-V3.2 (Thinking): larger context window (131,072 tokens); open weights
GPT-5 Codex: higher SWE-Bench Verified score (74.5% vs 73.1%)

Detailed Comparison

AI Model Comparison Table
Feature | DeepSeek: DeepSeek-V3.2 (Thinking) | OpenAI: GPT-5 Codex