Model Comparison

DeepSeek-R1-0528 vs Gemini 2.0 Flash-Lite

DeepSeek-R1-0528 significantly outperforms Gemini 2.0 Flash-Lite across most shared benchmarks, while Gemini 2.0 Flash-Lite is roughly 7.2x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

Across the three shared benchmarks (GPQA, MMLU-Pro, SimpleQA), DeepSeek-R1-0528 leads in all three; Gemini 2.0 Flash-Lite leads in none.

Fri May 01 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Gemini 2.0 Flash-Lite costs less

For input processing, DeepSeek-R1-0528 ($0.50/1M tokens) is 7.1x more expensive than Gemini 2.0 Flash-Lite ($0.07/1M tokens).

For output processing, DeepSeek-R1-0528 ($2.15/1M tokens) is 7.2x more expensive than Gemini 2.0 Flash-Lite ($0.30/1M tokens).

In conclusion, DeepSeek-R1-0528 is roughly 7.2x more expensive than Gemini 2.0 Flash-Lite on a blended basis.*

* Using a 3:1 ratio of input to output tokens
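The blended figure can be reproduced with a few lines of arithmetic, using the per-1M-token prices listed on this page and the 3:1 input-to-output ratio from the footnote (the helper name is illustrative, not from any API):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens for a given input:output mix."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

deepseek = blended_price(0.50, 2.15)  # DeepSeek-R1-0528 via Deepinfra
gemini = blended_price(0.07, 0.30)    # Gemini 2.0 Flash-Lite via Google
print(f"DeepSeek-R1-0528:      ${deepseek:.4f}/1M")   # $0.9125/1M
print(f"Gemini 2.0 Flash-Lite: ${gemini:.4f}/1M")     # $0.1275/1M
print(f"Ratio: {deepseek / gemini:.1f}x")             # 7.2x
```

Note the blended ratio (about 7.2x) sits between the raw input ratio (7.1x) and output ratio (7.2x), which is why the two headline figures on this page differ slightly.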

Lowest available price from all providers
DeepSeek-R1-0528 (DeepSeek)
Input tokens: $0.50/1M
Output tokens: $2.15/1M
Best provider: Deepinfra

Gemini 2.0 Flash-Lite (Google)
Input tokens: $0.07/1M
Output tokens: $0.30/1M
Best provider: Google

Context Window

Maximum input and output token capacity

Gemini 2.0 Flash-Lite accepts 1,048,576 input tokens compared to DeepSeek-R1-0528's 131,072 tokens. DeepSeek-R1-0528 can generate longer responses up to 131,072 tokens, while Gemini 2.0 Flash-Lite is limited to 8,192 tokens.

DeepSeek-R1-0528 (DeepSeek)
Input: 131,072 tokens
Output: 131,072 tokens

Gemini 2.0 Flash-Lite (Google)
Input: 1,048,576 tokens
Output: 8,192 tokens
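The practical effect of these input limits can be sketched with a quick fit check. The 4-characters-per-token heuristic below is a crude assumption for rough sizing; real tokenizers vary by model and content:

```python
# Input context limits from the table above.
CONTEXT_WINDOWS = {
    "DeepSeek-R1-0528": {"input": 131_072, "output": 131_072},
    "Gemini 2.0 Flash-Lite": {"input": 1_048_576, "output": 8_192},
}

def fits_context(model: str, prompt_chars: int, chars_per_token: int = 4) -> bool:
    """Rough check: does a prompt of this size fit the model's input window?"""
    est_tokens = prompt_chars // chars_per_token
    return est_tokens <= CONTEXT_WINDOWS[model]["input"]

# A ~2M-character document (~500K estimated tokens) fits Gemini 2.0
# Flash-Lite's window but not DeepSeek-R1-0528's.
print(fits_context("Gemini 2.0 Flash-Lite", 2_000_000))  # True
print(fits_context("DeepSeek-R1-0528", 2_000_000))       # False
```

The reverse trade-off applies to generation: DeepSeek-R1-0528 can emit up to 131,072 tokens in one response, versus 8,192 for Gemini 2.0 Flash-Lite.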

Input Capabilities

Supported data types and modalities

Gemini 2.0 Flash-Lite supports multimodal inputs, whereas DeepSeek-R1-0528 does not.

Gemini 2.0 Flash-Lite can handle both text and other forms of data like images, making it suitable for multimodal applications.

DeepSeek-R1-0528: Text ✓ · Images ✗ · Audio ✗ · Video ✗

Gemini 2.0 Flash-Lite: Text ✓ · Images ✓ · Audio ✓ · Video ✓

License

Usage and distribution terms

DeepSeek-R1-0528 is licensed under MIT, while Gemini 2.0 Flash-Lite uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-R1-0528: MIT (open weights)

Gemini 2.0 Flash-Lite: Proprietary (closed source)

Release Timeline

When each model was launched

DeepSeek-R1-0528 was released on 2025-05-28, while Gemini 2.0 Flash-Lite was released on 2025-02-05.

DeepSeek-R1-0528 is 4 months newer than Gemini 2.0 Flash-Lite.

DeepSeek-R1-0528: May 28, 2025 (11 months ago; about 4 months newer)

Gemini 2.0 Flash-Lite: Feb 5, 2025 (1.2 years ago)

Knowledge Cutoff

When training data ends

Gemini 2.0 Flash-Lite has a documented knowledge cutoff of 2024-06-01, while DeepSeek-R1-0528's cutoff date is not specified.

We can confirm Gemini 2.0 Flash-Lite's training data extends to 2024-06-01, but cannot make a direct comparison without DeepSeek-R1-0528's cutoff date.

DeepSeek-R1-0528: not specified

Gemini 2.0 Flash-Lite: Jun 2024

Provider Availability

DeepSeek-R1-0528 is available from Deepinfra, DeepSeek, and Novita. Gemini 2.0 Flash-Lite is available from Google.

DeepSeek-R1-0528
Deepinfra: $0.50/1M input · $2.15/1M output
DeepSeek: $0.55/1M input · $2.19/1M output
Novita: $0.70/1M input · $2.50/1M output

Gemini 2.0 Flash-Lite
Google: $0.07/1M input · $0.30/1M output
* Prices shown are per million tokens
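Given several providers for the same model, the cheapest one depends on the token mix of a job. A minimal selection sketch, using the DeepSeek-R1-0528 provider prices listed above (function and dictionary names are illustrative):

```python
# Per-1M-token (input, output) prices for DeepSeek-R1-0528, from the list above.
PROVIDERS = {
    "Deepinfra": (0.50, 2.15),
    "DeepSeek": (0.55, 2.19),
    "Novita": (0.70, 2.50),
}

def job_cost(prices: tuple[float, float],
             input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a job at the given per-1M-token rates."""
    in_price, out_price = prices
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

def cheapest(input_tokens: int, output_tokens: int) -> str:
    """Name of the provider with the lowest cost for this token mix."""
    return min(PROVIDERS,
               key=lambda p: job_cost(PROVIDERS[p], input_tokens, output_tokens))

# 3M input + 1M output tokens: Deepinfra ($3.65) beats DeepSeek ($3.84)
# and Novita ($4.60).
print(cheapest(3_000_000, 1_000_000))  # Deepinfra
```

Here Deepinfra is cheapest on both input and output, so it wins for any mix; when providers split the lead (one cheaper on input, another on output), the winner shifts with the ratio.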


Key Takeaways

DeepSeek-R1-0528:
Open weights (MIT license)
Higher GPQA score (81.0% vs 51.5%)
Higher MMLU-Pro score (85.0% vs 71.6%)
Higher SimpleQA score (92.3% vs 21.7%)

Gemini 2.0 Flash-Lite:
Larger context window (1,048,576 vs 131,072 tokens)
Supports multimodal inputs
Less expensive input and output tokens

Detailed Comparison

Feature | DeepSeek-R1-0528 (DeepSeek) | Gemini 2.0 Flash-Lite (Google)
Context window | 131,072 tokens | 1,048,576 tokens
Max output | 131,072 tokens | 8,192 tokens
Input price | $0.50/1M | $0.07/1M
Output price | $2.15/1M | $0.30/1M
Multimodal inputs | No | Yes
License | MIT | Proprietary
Release date | May 28, 2025 | Feb 5, 2025

FAQ

Common questions about DeepSeek-R1-0528 vs Gemini 2.0 Flash-Lite

DeepSeek-R1-0528 significantly outperforms Gemini 2.0 Flash-Lite across most shared benchmarks. DeepSeek-R1-0528 is made by DeepSeek and Gemini 2.0 Flash-Lite is made by Google. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.
DeepSeek-R1-0528 scores MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, MMLU-Pro: 85.0%. Gemini 2.0 Flash-Lite scores MATH: 86.8%, FACTS Grounding: 83.6%, Global-MMLU-Lite: 78.2%, MMLU-Pro: 71.6%, MMMU: 68.0%.
Gemini 2.0 Flash-Lite is 7.1x cheaper for input tokens. DeepSeek-R1-0528 costs $0.50/M input and $2.15/M output via Deepinfra; Gemini 2.0 Flash-Lite costs $0.07/M input and $0.30/M output via Google.
DeepSeek-R1-0528 supports 131K tokens and Gemini 2.0 Flash-Lite supports 1.0M tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include context window (131K vs 1.0M), input pricing ($0.50 vs $0.07/M), multimodal support (no vs yes), licensing (MIT vs Proprietary). See the full comparison above for benchmark-by-benchmark results.
DeepSeek-R1-0528 is developed by DeepSeek and Gemini 2.0 Flash-Lite is developed by Google.