Model Comparison

DeepSeek VL2 vs Gemini 2.0 Flash Thinking

Gemini 2.0 Flash Thinking comes out ahead on the only benchmark the two models share (MMMU).

Performance Benchmarks

Comparative analysis across standard metrics

1 shared benchmark

Only one benchmark (MMMU) is reported for both models. DeepSeek VL2 leads on 0 shared benchmarks, while Gemini 2.0 Flash Thinking is ahead on 1 (MMMU, 75.4% vs 51.1%).
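For readers who want to reproduce that tally, here is a minimal Python sketch of the logic, using the scores quoted on this page and assuming higher is better for every metric:

```python
# Scores as quoted on this page.
deepseek_vl2 = {"DocVQA": 93.3, "ChartQA": 86.0, "TextVQA": 84.2,
                "AI2D": 81.4, "OCRBench": 81.1, "MMMU": 51.1}
gemini_2_0_flash_thinking = {"MMMU": 75.4, "GPQA": 74.2, "AIME 2024": 73.3}

# Only benchmarks reported for BOTH models are directly comparable.
shared = sorted(deepseek_vl2.keys() & gemini_2_0_flash_thinking.keys())
for bench in shared:
    a, b = deepseek_vl2[bench], gemini_2_0_flash_thinking[bench]
    leader = "DeepSeek VL2" if a > b else "Gemini 2.0 Flash Thinking"
    print(f"{bench}: {a}% vs {b}% -> {leader}")
# Prints: MMMU: 51.1% vs 75.4% -> Gemini 2.0 Flash Thinking
```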

Data as of Apr 30, 2026 (llm-stats.com).

Arena Performance

Human preference votes

No human preference vote data is available for this pair.

Pricing Analysis

Price comparison per million tokens

Cost data unavailable.

Neither DeepSeek (for DeepSeek VL2) nor Google (for Gemini 2.0 Flash Thinking) has per-token pricing listed from any tracked provider.
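When rates are published, per-request cost is simple arithmetic on the per-million-token prices. A minimal sketch; the rates below are hypothetical placeholders, not prices for either model:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Dollar cost of one request, with rates in $ per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Hypothetical rates: $0.10 per 1M input tokens, $0.40 per 1M output tokens.
print(f"${request_cost(12_000, 800, 0.10, 0.40):.6f}")  # -> $0.001520
```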

Context Window

Maximum input and output token capacity

Only DeepSeek VL2 specifies its context limits: 129,280 tokens for both input and output. No context figures are listed for Gemini 2.0 Flash Thinking.

DeepSeek VL2 (DeepSeek)
Input: 129,280 tokens
Output: 129,280 tokens

Gemini 2.0 Flash Thinking (Google)
Input: not specified
Output: not specified
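One practical use of these limits is a pre-flight size check before sending a prompt. A rough sketch, assuming ~4 characters per token (a crude heuristic; use the model's actual tokenizer for anything load-bearing):

```python
DEEPSEEK_VL2_CONTEXT = 129_280  # tokens, from the table above

def fits_in_context(prompt: str, reserved_output: int = 1_024,
                    limit: int = DEEPSEEK_VL2_CONTEXT) -> bool:
    # Crude estimate: roughly 4 characters per token on English text.
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserved_output <= limit

print(fits_in_context("word " * 50_000))   # True  (~62.5K estimated tokens)
print(fits_in_context("word " * 110_000))  # False (~137.5K estimated tokens)
```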

Input Capabilities

Supported data types and modalities

Both DeepSeek VL2 and Gemini 2.0 Flash Thinking support multimodal inputs.

Both accept inputs beyond plain text, which is what enables the document-, chart-, and image-understanding benchmarks listed above.

DeepSeek VL2

Text
Images
Audio
Video

Gemini 2.0 Flash Thinking

Text
Images
Audio
Video
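A capability matrix like this maps naturally onto routing code. A small sketch; the modality sets simply mirror the lists above, so verify them against each provider's documentation before depending on them:

```python
# Pick the models that cover every modality a request needs.
SUPPORTED_INPUTS = {
    "DeepSeek VL2": {"text", "images", "audio", "video"},
    "Gemini 2.0 Flash Thinking": {"text", "images", "audio", "video"},
}

def candidates(required: set[str]) -> list[str]:
    return [model for model, modalities in SUPPORTED_INPUTS.items()
            if required <= modalities]  # set inclusion: all required supported

print(candidates({"text", "images"}))  # both models qualify
```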

License

Usage and distribution terms

DeepSeek VL2 is released under DeepSeek's own model license with open weights, while Gemini 2.0 Flash Thinking uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek VL2
DeepSeek model license (open weights)

Gemini 2.0 Flash Thinking
Proprietary (closed source)

Release Timeline

When each model was launched

DeepSeek VL2 was released on December 13, 2024; Gemini 2.0 Flash Thinking followed on January 21, 2025.

Gemini 2.0 Flash Thinking is about one month newer than DeepSeek VL2.

DeepSeek VL2
Dec 13, 2024 (1.4 years ago)

Gemini 2.0 Flash Thinking
Jan 21, 2025 (1.3 years ago; about 1 month newer)
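The "about one month" figure is plain date arithmetic:

```python
from datetime import date

# Release dates from this page.
deepseek_vl2_release = date(2024, 12, 13)
gemini_release = date(2025, 1, 21)

print((gemini_release - deepseek_vl2_release).days)  # 39 days, roughly a month
```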

Knowledge Cutoff

When training data ends

Gemini 2.0 Flash Thinking has a documented knowledge cutoff of 2024-08-01, while DeepSeek VL2's cutoff date is not specified.

We can confirm Gemini 2.0 Flash Thinking's training data extends to 2024-08-01, but cannot make a direct comparison without DeepSeek VL2's cutoff date.

DeepSeek VL2: not specified
Gemini 2.0 Flash Thinking: Aug 2024

Outputs Comparison

Output data unavailable.

Key Takeaways

DeepSeek VL2: larger context window (129,280 tokens) and open weights.
Gemini 2.0 Flash Thinking: higher MMMU score (75.4% vs 51.1%).

Detailed Comparison

AI Model Comparison Table

Feature            DeepSeek VL2               Gemini 2.0 Flash Thinking
Developer          DeepSeek                   Google
Released           Dec 13, 2024               Jan 21, 2025
Input context      129,280 tokens             Not specified
Output context     129,280 tokens             Not specified
License            DeepSeek license (open)    Proprietary (closed)
Knowledge cutoff   Not specified              Aug 2024
MMMU               51.1%                      75.4%

FAQ

Common questions about DeepSeek VL2 vs Gemini 2.0 Flash Thinking

Which model performs better?
On the only benchmark both models report (MMMU), Gemini 2.0 Flash Thinking scores higher. The best choice still depends on your use case; compare the benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek VL2 scores DocVQA: 93.3%, ChartQA: 86.0%, TextVQA: 84.2%, AI2D: 81.4%, OCRBench: 81.1%. Gemini 2.0 Flash Thinking scores MMMU: 75.4%, GPQA: 74.2%, AIME 2024: 73.3%.

Which has the larger context window?
DeepSeek VL2 supports 129K tokens; Gemini 2.0 Flash Thinking's limit is not listed here. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Licensing is the clearest one: DeepSeek VL2 uses DeepSeek's open-weights license, while Gemini 2.0 Flash Thinking is proprietary. See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
DeepSeek VL2 is developed by DeepSeek; Gemini 2.0 Flash Thinking is developed by Google.