Model Comparison

DeepSeek-V3.2-Exp vs Gemini 2.0 Flash

DeepSeek-V3.2-Exp significantly outperforms across most benchmarks. Gemini 2.0 Flash is roughly 1.7x cheaper per token on a blended 3:1 input:output basis.

Performance Benchmarks

Comparative analysis across standard metrics


DeepSeek-V3.2-Exp outperforms on all three shared benchmarks (GPQA, LiveCodeBench, MMLU-Pro); Gemini 2.0 Flash leads on none.

DeepSeek-V3.2-Exp significantly outperforms across most benchmarks.

Sun Apr 19 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Gemini 2.0 Flash costs less

For input processing, DeepSeek-V3.2-Exp ($0.27/1M tokens) is 2.7x more expensive than Gemini 2.0 Flash ($0.10/1M tokens).

For output processing, DeepSeek-V3.2-Exp ($0.41/1M tokens) is roughly on par with Gemini 2.0 Flash ($0.40/1M tokens), about 3% more expensive.

In conclusion, DeepSeek-V3.2-Exp is more expensive than Gemini 2.0 Flash.*

* Using a 3:1 ratio of input to output tokens
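The blended comparison above can be reproduced with a short calculation. This is a minimal sketch using the prices listed on this page and the page's stated 3:1 input:output token ratio; the helper name `blended_price` is illustrative.

```python
# Blended cost per 1M tokens, assuming the page's 3:1 input:output ratio.
def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted average price per 1M tokens for a given input:output token ratio."""
    return (ratio * input_price + output_price) / (ratio + 1)

deepseek = blended_price(0.27, 0.41)  # $0.305 per 1M tokens
gemini = blended_price(0.10, 0.40)    # $0.175 per 1M tokens
print(f"DeepSeek blended: ${deepseek:.3f}/1M")
print(f"Gemini blended:   ${gemini:.3f}/1M")
print(f"Ratio: {deepseek / gemini:.2f}x")  # ~1.74x, matching the headline figure
```

Shifting the ratio changes the conclusion: at output-heavy workloads the two models converge, since their output prices differ by only one cent per million tokens.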

Lowest available price from all providers
DeepSeek
DeepSeek-V3.2-Exp
Input tokens: $0.27
Output tokens: $0.41
Best provider: Novita

Google
Gemini 2.0 Flash
Input tokens: $0.10
Output tokens: $0.40
Best provider: Google

Context Window

Maximum input and output token capacity

Gemini 2.0 Flash accepts 1,048,576 input tokens compared to DeepSeek-V3.2-Exp's 163,840 tokens. DeepSeek-V3.2-Exp can generate longer responses up to 65,536 tokens, while Gemini 2.0 Flash is limited to 8,192 tokens.
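The practical question these limits answer is whether a given prompt fits in a single request. Below is a rough sketch: the `len(text) / 4` token estimate is a common rule of thumb, not an exact tokenizer, and the `fits_context` helper and reserved-output default are assumptions for illustration.

```python
# Context limits from the table above (input capacity per request).
CONTEXT_LIMITS = {
    "deepseek-v3.2-exp": 163_840,
    "gemini-2.0-flash": 1_048_576,
}

def fits_context(text: str, model: str, reserved_output: int = 8_192) -> bool:
    """True if the estimated prompt tokens plus reserved output fit the window.

    Token count is approximated as len(text) // 4; use the model's real
    tokenizer for anything precision-sensitive.
    """
    est_tokens = len(text) // 4
    return est_tokens + reserved_output <= CONTEXT_LIMITS[model]

doc = "x" * 2_000_000  # ~500K estimated tokens
print(fits_context(doc, "deepseek-v3.2-exp"))  # False
print(fits_context(doc, "gemini-2.0-flash"))   # True
```

A document of roughly 500K tokens would need chunking for DeepSeek-V3.2-Exp but fits in one Gemini 2.0 Flash request, while DeepSeek's 65,536-token output ceiling favors it for long generations.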

DeepSeek
DeepSeek-V3.2-Exp
Input: 163,840 tokens
Output: 65,536 tokens

Google
Gemini 2.0 Flash
Input: 1,048,576 tokens
Output: 8,192 tokens

Input Capabilities

Supported data types and modalities

Gemini 2.0 Flash supports multimodal inputs, whereas DeepSeek-V3.2-Exp does not.

Gemini 2.0 Flash can handle both text and other forms of data like images, making it suitable for multimodal applications.

DeepSeek-V3.2-Exp

Text: supported
Images: not supported
Audio: not supported
Video: not supported

Gemini 2.0 Flash

Text: supported
Images: supported
Audio: supported
Video: supported
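The modality gap shows up in how requests are built. This sketch uses the widely adopted OpenAI-compatible chat message shape for multimodal content; the field names are illustrative of that convention, not taken from either vendor's documentation, so check your provider's API reference (Gemini also exposes its own native API).

```python
import base64

def text_message(prompt: str) -> dict:
    """Plain-text user message; valid input for both models."""
    return {"role": "user", "content": prompt}

def image_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Text + inline base64 image in the OpenAI-compatible content-parts shape.

    Only meaningful for a multimodal model like Gemini 2.0 Flash;
    DeepSeek-V3.2-Exp accepts text input only.
    """
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }
```

For a text-only model, the multimodal content parts would simply be rejected, so applications targeting both models need a text-only fallback path.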

License

Usage and distribution terms

DeepSeek-V3.2-Exp is licensed under MIT, while Gemini 2.0 Flash uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-V3.2-Exp

MIT

Open weights

Gemini 2.0 Flash

Proprietary

Closed source

Release Timeline

When each model was launched

DeepSeek-V3.2-Exp was released on 2025-09-29, while Gemini 2.0 Flash was released on 2024-12-01.

DeepSeek-V3.2-Exp is 10 months newer than Gemini 2.0 Flash.

DeepSeek-V3.2-Exp

Released Sep 29, 2025 (6 months ago; 10 months newer)

Gemini 2.0 Flash

Released Dec 1, 2024 (1.4 years ago)

Knowledge Cutoff

When training data ends

Gemini 2.0 Flash has a documented knowledge cutoff of 2024-08-01, while DeepSeek-V3.2-Exp's cutoff date is not specified.

We can confirm Gemini 2.0 Flash's training data extends to 2024-08-01, but cannot make a direct comparison without DeepSeek-V3.2-Exp's cutoff date.

DeepSeek-V3.2-Exp: not specified

Gemini 2.0 Flash: Aug 2024

Provider Availability

DeepSeek-V3.2-Exp is available from Novita. Gemini 2.0 Flash is available from Google.

DeepSeek-V3.2-Exp

Novita: $0.27/1M input, $0.41/1M output

Gemini 2.0 Flash

Google: $0.10/1M input, $0.40/1M output
* Prices shown are per million tokens


Key Takeaways

DeepSeek-V3.2-Exp

Has open weights (MIT license)
Higher GPQA score (79.9% vs 62.1%)
Higher LiveCodeBench score (74.1% vs 35.1%)
Higher MMLU-Pro score (85.0% vs 76.4%)

Gemini 2.0 Flash

Larger context window (1,048,576 tokens)
Supports multimodal inputs
Less expensive input tokens
Less expensive output tokens


FAQ

Common questions about DeepSeek-V3.2-Exp vs Gemini 2.0 Flash

Which model is better, DeepSeek-V3.2-Exp or Gemini 2.0 Flash?

DeepSeek-V3.2-Exp significantly outperforms across most benchmarks. DeepSeek-V3.2-Exp is made by DeepSeek and Gemini 2.0 Flash is made by Google. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?

DeepSeek-V3.2-Exp scores SimpleQA: 97.1%, AIME 2025: 89.3%, MMLU-Pro: 85.0%, HMMT 2025: 83.6%, GPQA: 79.9%. Gemini 2.0 Flash scores Natural2Code: 92.9%, MATH: 89.7%, FACTS Grounding: 83.6%, MMLU-Pro: 76.4%, EgoSchema: 71.5%.

Which model is cheaper?

Gemini 2.0 Flash is 2.7x cheaper for input tokens. DeepSeek-V3.2-Exp costs $0.27/M input and $0.41/M output via Novita. Gemini 2.0 Flash costs $0.10/M input and $0.40/M output via Google.

Which model has the larger context window?

DeepSeek-V3.2-Exp supports 164K tokens and Gemini 2.0 Flash supports 1.0M tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the two models?

Key differences include context window (164K vs 1.0M), input pricing ($0.27 vs $0.10/M), multimodal support (no vs yes), and licensing (MIT vs proprietary). See the full comparison above for benchmark-by-benchmark results.

Who makes each model?

DeepSeek-V3.2-Exp is developed by DeepSeek and Gemini 2.0 Flash is developed by Google.