Model Comparison

DeepSeek-R1-0528 vs GPT OSS 20B

DeepSeek-R1-0528 shows notably better performance in the majority of benchmarks, while GPT OSS 20B is roughly 10.4x cheaper per blended token.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

DeepSeek-R1-0528 outperforms in 2 benchmarks (GPQA, Humanity's Last Exam), while GPT OSS 20B is better at 1 benchmark (CodeForces).

DeepSeek-R1-0528 shows notably better performance in the majority of benchmarks.

Wed Apr 22 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

GPT OSS 20B costs less

For input processing, DeepSeek-R1-0528 ($0.50/1M tokens) is 10.0x more expensive than GPT OSS 20B ($0.05/1M tokens).

For output processing, DeepSeek-R1-0528 ($2.15/1M tokens) is 10.7x more expensive than GPT OSS 20B ($0.20/1M tokens).

In conclusion, DeepSeek-R1-0528 is roughly 10.4x more expensive than GPT OSS 20B on a blended basis.*

* Using a 3:1 ratio of input to output tokens
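The blended figure can be reproduced directly from the listed prices. A quick sketch; `blended_price` is an illustrative helper, and the 3:1 mix matches the footnote above:

```python
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Blended $/1M tokens, assuming a 3:1 input:output token mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

deepseek = blended_price(0.50, 2.15)   # ~$0.9125 / 1M tokens
gpt_oss = blended_price(0.05, 0.20)    # ~$0.0875 / 1M tokens
print(round(deepseek / gpt_oss, 1))    # → 10.4
```

At a different workload mix the ratio shifts, so the 10.4x figure only holds under the 3:1 assumption.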

Lowest available price from all providers
DeepSeek-R1-0528 (DeepSeek)
Input tokens: $0.50 / 1M
Output tokens: $2.15 / 1M
Best provider: Deepinfra

GPT OSS 20B (OpenAI)
Input tokens: $0.05 / 1M
Output tokens: $0.20 / 1M
Best provider: Novita

Model Size

Parameter count comparison

650.1B diff

DeepSeek-R1-0528 has 650.1B more parameters than GPT OSS 20B, making it 3110.5% larger.

DeepSeek-R1-0528 (DeepSeek): 671.0B parameters
GPT OSS 20B (OpenAI): 20.9B parameters
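The size gap works out as follows, a quick check using the parameter counts above:

```python
deepseek_params = 671.0  # billions of parameters
gpt_oss_params = 20.9

diff = deepseek_params - gpt_oss_params      # absolute gap, in billions
pct_larger = diff / gpt_oss_params * 100     # how much larger, as a percentage
print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # → 650.1B diff, 3110.5% larger
```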

Context Window

Maximum input and output token capacity

Both models have the same input context window of 131,072 tokens. DeepSeek-R1-0528 can generate longer responses up to 131,072 tokens, while GPT OSS 20B is limited to 32,768 tokens.

DeepSeek-R1-0528 (DeepSeek)
Input: 131,072 tokens
Output: 131,072 tokens

GPT OSS 20B (OpenAI)
Input: 131,072 tokens
Output: 32,768 tokens
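Since the input windows match, the output cap is the limit to watch. A minimal sketch, assuming a hypothetical `fits` helper and the token limits listed above:

```python
# Context limits from the comparison above (tokens).
MODELS = {
    "DeepSeek-R1-0528": {"input": 131_072, "output": 131_072},
    "GPT OSS 20B": {"input": 131_072, "output": 32_768},
}

def fits(model, prompt_tokens, max_output_tokens):
    """Check whether a request stays within a model's input and output caps."""
    caps = MODELS[model]
    return prompt_tokens <= caps["input"] and max_output_tokens <= caps["output"]

print(fits("GPT OSS 20B", 100_000, 40_000))       # → False (output cap is 32,768)
print(fits("DeepSeek-R1-0528", 100_000, 40_000))  # → True
```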

License

Usage and distribution terms

DeepSeek-R1-0528 is licensed under MIT, while GPT OSS 20B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-R1-0528

MIT

Open weights

GPT OSS 20B

Apache 2.0

Open weights

Release Timeline

When each model was launched

DeepSeek-R1-0528 was released on 2025-05-28, while GPT OSS 20B was released on 2025-08-05.

GPT OSS 20B is 2 months newer than DeepSeek-R1-0528.

DeepSeek-R1-0528

May 28, 2025

10 months ago

GPT OSS 20B

Aug 5, 2025

8 months ago

2mo newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.


Provider Availability

DeepSeek-R1-0528 is available from DeepInfra, DeepSeek, Novita. GPT OSS 20B is available from Novita, Fireworks, Groq, OpenAI.

DeepSeek-R1-0528

Deepinfra: Input $0.50/1M, Output $2.15/1M
DeepSeek: Input $0.55/1M, Output $2.19/1M
Novita: Input $0.70/1M, Output $2.50/1M

GPT OSS 20B

Novita: Input $0.05/1M, Output $0.20/1M
Fireworks: Input $0.10/1M, Output $0.50/1M
Groq: Input $0.10/1M, Output $0.50/1M
OpenAI: Input $0.10/1M, Output $0.50/1M
* Prices shown are per million tokens
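Given the per-provider prices above, the "best provider" picks can be recomputed as the lowest blended cost. An illustrative sketch; `cheapest` and the 3:1 mix are assumptions, and the prices come from the lists above:

```python
# (provider, input $/1M, output $/1M) from the availability lists above.
PROVIDERS = {
    "DeepSeek-R1-0528": [
        ("Deepinfra", 0.50, 2.15),
        ("DeepSeek", 0.55, 2.19),
        ("Novita", 0.70, 2.50),
    ],
    "GPT OSS 20B": [
        ("Novita", 0.05, 0.20),
        ("Fireworks", 0.10, 0.50),
        ("Groq", 0.10, 0.50),
        ("OpenAI", 0.10, 0.50),
    ],
}

def cheapest(model, input_ratio=3, output_ratio=1):
    """Provider with the lowest blended $/1M at a 3:1 input:output mix."""
    total = input_ratio + output_ratio
    return min(
        PROVIDERS[model],
        key=lambda p: (p[1] * input_ratio + p[2] * output_ratio) / total,
    )

print(cheapest("DeepSeek-R1-0528")[0])  # → Deepinfra
print(cheapest("GPT OSS 20B")[0])       # → Novita
```

Note the ranking can change with the workload mix: at output-heavy ratios a provider with cheaper output tokens could win instead.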


Key Takeaways

DeepSeek-R1-0528: higher GPQA score (81.0% vs 71.5%)
DeepSeek-R1-0528: higher Humanity's Last Exam score (17.7% vs 10.9%)
GPT OSS 20B: less expensive input tokens ($0.05 vs $0.50/1M)
GPT OSS 20B: less expensive output tokens ($0.20 vs $2.15/1M)
GPT OSS 20B: higher CodeForces score (74.3% vs 64.3%)

Detailed Comparison

[Feature-by-feature comparison table of DeepSeek-R1-0528 (DeepSeek) and GPT OSS 20B (OpenAI); see the sections above for the underlying data.]

FAQ

Common questions about DeepSeek-R1-0528 vs GPT OSS 20B

DeepSeek-R1-0528 shows notably better performance in the majority of benchmarks. DeepSeek-R1-0528 is made by DeepSeek and GPT OSS 20B is made by OpenAI. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.
DeepSeek-R1-0528 scores MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, MMLU-Pro: 85.0%. GPT OSS 20B scores MMLU: 85.3%, CodeForces: 74.3%, GPQA: 71.5%, TAU-bench Retail: 54.8%, HealthBench: 42.5%.
GPT OSS 20B is 10.0x cheaper for input tokens. DeepSeek-R1-0528 costs $0.50/M input and $2.15/M output via deepinfra. GPT OSS 20B costs $0.05/M input and $0.20/M output via novita.
Both models support a 131K-token input context window; DeepSeek-R1-0528 can generate up to 131K output tokens, while GPT OSS 20B is limited to 32K. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include input pricing ($0.50 vs $0.05/M) and licensing (MIT vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.
DeepSeek-R1-0528 is developed by DeepSeek and GPT OSS 20B is developed by OpenAI.