Model Comparison

Gemini 2.0 Flash Thinking vs DeepSeek-V3

Gemini 2.0 Flash Thinking scores significantly higher on the benchmarks both models report.

Performance Benchmarks

Comparative analysis across standard metrics

Gemini 2.0 Flash Thinking outperforms DeepSeek-V3 on both benchmarks the two models have in common (AIME 2024 and GPQA), while DeepSeek-V3 leads on none of the shared benchmarks.

Arena Performance

Human preference votes

Context Window

Maximum input and output token capacity

Only DeepSeek-V3 specifies its context limits: 131,072 input tokens and 131,072 output tokens. Gemini 2.0 Flash Thinking's input and output limits are not listed here.

Google
Gemini 2.0 Flash Thinking
Input: not specified
Output: not specified

DeepSeek
DeepSeek-V3
Input: 131,072 tokens
Output: 131,072 tokens
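As a rough sense of what a 131,072-token window allows, the sketch below estimates whether a document fits before sending it. It assumes the common heuristic of roughly four characters per token and reserves an arbitrary slice of the window for the model's reply; actual counts depend on each model's own tokenizer, so treat this as an approximation rather than a guarantee.

    # Rough token-budget check against DeepSeek-V3's documented 131,072-token window.
    # The 4-characters-per-token ratio and the output reserve are heuristic assumptions.
    CONTEXT_LIMIT = 131_072
    CHARS_PER_TOKEN = 4          # rough average for English text
    RESERVED_FOR_OUTPUT = 8_192  # arbitrary headroom left for the reply

    def fits_in_context(text: str) -> bool:
        """Return True if the prompt likely fits within the window."""
        estimated_tokens = len(text) // CHARS_PER_TOKEN
        return estimated_tokens <= CONTEXT_LIMIT - RESERVED_FOR_OUTPUT

    with open("long_report.txt", encoding="utf-8") as f:  # illustrative file name
        print(fits_in_context(f.read()))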

Input Capabilities

Supported data types and modalities

Gemini 2.0 Flash Thinking supports multimodal inputs, whereas DeepSeek-V3 does not.

Gemini 2.0 Flash Thinking can handle text alongside other data types such as images, making it suitable for multimodal applications.

Gemini 2.0 Flash Thinking

Text, Images, Audio, Video

DeepSeek-V3

Text only
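As a concrete sketch of the difference, the snippet below sends an image plus text to Gemini 2.0 Flash Thinking and a text-only prompt to DeepSeek-V3. It assumes the vendors' published Python clients (google-genai and the OpenAI-compatible SDK); the model ids, API keys, and file name are illustrative assumptions rather than details taken from this comparison.

    # Illustrative only: model ids, keys, and paths are assumptions; check each vendor's docs.
    from google import genai
    from google.genai import types
    from openai import OpenAI

    # Gemini 2.0 Flash Thinking: text plus an image in a single multimodal request.
    gemini = genai.Client(api_key="GEMINI_API_KEY")
    with open("chart.png", "rb") as f:
        image_part = types.Part.from_bytes(data=f.read(), mime_type="image/png")
    gemini_resp = gemini.models.generate_content(
        model="gemini-2.0-flash-thinking-exp",  # assumed experimental model id
        contents=[image_part, "Summarize the trend shown in this chart."],
    )
    print(gemini_resp.text)

    # DeepSeek-V3: text-only input via its OpenAI-compatible API.
    deepseek = OpenAI(api_key="DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
    deepseek_resp = deepseek.chat.completions.create(
        model="deepseek-chat",  # assumed id for DeepSeek-V3 on the public API
        messages=[{"role": "user", "content": "Summarize the trend in this quarterly data: ..."}],
    )
    print(deepseek_resp.choices[0].message.content)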

License

Usage and distribution terms

Gemini 2.0 Flash Thinking is licensed under a proprietary license, while DeepSeek-V3 uses MIT + Model License (Commercial use allowed).

License differences may affect how you can use these models in commercial or open-source projects.

Gemini 2.0 Flash Thinking

Proprietary (closed source)

DeepSeek-V3

MIT + Model License (commercial use allowed), open weights

Release Timeline

When each model was launched

Gemini 2.0 Flash Thinking was released on 2025-01-21, while DeepSeek-V3 was released on 2024-12-25.

Gemini 2.0 Flash Thinking is about four weeks (27 days) newer than DeepSeek-V3.

Gemini 2.0 Flash Thinking

Jan 21, 2025

DeepSeek-V3

Dec 25, 2024
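The gap between the two release dates works out to 27 days, which Python's standard datetime module confirms:

    from datetime import date

    # Release dates as listed above.
    gemini_release = date(2025, 1, 21)
    deepseek_release = date(2024, 12, 25)
    print((gemini_release - deepseek_release).days)  # 27, i.e. just under four weeks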

Knowledge Cutoff

When training data ends

Gemini 2.0 Flash Thinking has a documented knowledge cutoff of 2024-08-01, while DeepSeek-V3's cutoff date is not specified.

We can confirm Gemini 2.0 Flash Thinking's training data extends to 2024-08-01, but cannot make a direct comparison without DeepSeek-V3's cutoff date.

Gemini 2.0 Flash Thinking

Aug 2024

DeepSeek-V3

Not specified



Key Takeaways

Gemini 2.0 Flash Thinking:
Supports multimodal inputs
Higher AIME 2024 score (73.3% vs 39.2%)
Higher GPQA score (74.2% vs 59.1%)

DeepSeek-V3:
Larger documented context window (131,072 tokens)
Open weights


FAQ

Common questions about Gemini 2.0 Flash Thinking vs DeepSeek-V3.

Which is better, Gemini 2.0 Flash Thinking or DeepSeek-V3?

Gemini 2.0 Flash Thinking scores significantly higher on the benchmarks both models report (AIME 2024 and GPQA). Gemini 2.0 Flash Thinking is made by Google and DeepSeek-V3 is made by DeepSeek. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How does Gemini 2.0 Flash Thinking compare to DeepSeek-V3 in benchmarks?

Gemini 2.0 Flash Thinking's top reported scores are MMMU 75.4%, GPQA 74.2%, and AIME 2024 73.3%; DeepSeek-V3's are DROP 91.6%, CLUEWSC 90.9%, MATH-500 90.2%, MMLU-Redux 89.1%, and MMLU 88.5%. On the two benchmarks both models report, Gemini 2.0 Flash Thinking leads: AIME 2024 (73.3% vs 39.2%) and GPQA (74.2% vs 59.1%).

What are the context window sizes for Gemini 2.0 Flash Thinking and DeepSeek-V3?

Gemini 2.0 Flash Thinking's context window is not specified here, while DeepSeek-V3 supports 131K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between Gemini 2.0 Flash Thinking and DeepSeek-V3?

Key differences include multimodal input support (Gemini 2.0 Flash Thinking: yes; DeepSeek-V3: text only) and licensing (proprietary for Gemini 2.0 Flash Thinking vs. MIT + Model License with commercial use allowed for DeepSeek-V3). See the full comparison above for benchmark-by-benchmark results.

Who makes Gemini 2.0 Flash Thinking and DeepSeek-V3?

Gemini 2.0 Flash Thinking is developed by Google and DeepSeek-V3 is developed by DeepSeek.