Model Comparison

Gemini 2.0 Flash Thinking vs Grok-3 Mini

Grok-3 Mini significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

Across 2 shared benchmarks, Gemini 2.0 Flash Thinking outperforms in 0, while Grok-3 Mini is better at 2 (AIME 2024, GPQA).

Sat May 16 2026 • llm-stats.com

Arena Performance

Human preference votes

Context Window

Maximum input and output token capacity

Only Grok-3 Mini specifies input context (128,000 tokens). Only Grok-3 Mini specifies output context (8,000 tokens).

Google — Gemini 2.0 Flash Thinking
Input: not specified
Output: not specified

xAI — Grok-3 Mini
Input: 128,000 tokens
Output: 8,000 tokens
Input Capabilities

Supported data types and modalities

Both Gemini 2.0 Flash Thinking and Grok-3 Mini support multimodal inputs.

Both can take text, image, audio, and video inputs, offering versatility across applications.

Gemini 2.0 Flash Thinking

Text
Images
Audio
Video

Grok-3 Mini

Text
Images
Audio
Video

License

Usage and distribution terms

Both models are licensed under proprietary licenses.

Both models have usage restrictions defined by their respective organizations.

Gemini 2.0 Flash Thinking

Proprietary

Closed source

Grok-3 Mini

Proprietary

Closed source

Release Timeline

When each model was launched

Gemini 2.0 Flash Thinking was released on 2025-01-21, while Grok-3 Mini was released on 2025-02-17.

Grok-3 Mini is 1 month newer than Gemini 2.0 Flash Thinking.

Gemini 2.0 Flash Thinking

Jan 21, 2025

1.3 years ago

Grok-3 Mini

Feb 17, 2025

1.2 years ago

4w newer

Knowledge Cutoff

When training data ends

Gemini 2.0 Flash Thinking has a knowledge cutoff of 2024-08-01, while Grok-3 Mini has a cutoff of 2024-11-17.

Grok-3 Mini has more recent training data (up to 2024-11-17), making it potentially better informed about events through that date compared to Gemini 2.0 Flash Thinking (2024-08-01).

Gemini 2.0 Flash Thinking

Aug 2024

Grok-3 Mini

Nov 2024

3 mo newer

Key Takeaways

Grok-3 Mini (xAI) holds the standout differentiators in the data for this pair:

Larger context window (128,000 tokens)
Higher AIME 2024 score (95.8% vs 73.3%)
Higher GPQA score (84.0% vs 74.2%)

No comparable advantages are recorded for Gemini 2.0 Flash Thinking (Google).

Detailed Comparison

Feature-by-feature comparison table: Google's Gemini 2.0 Flash Thinking vs xAI's Grok-3 Mini.

FAQ

Common questions about Gemini 2.0 Flash Thinking vs Grok-3 Mini.

Which is better, Gemini 2.0 Flash Thinking or Grok-3 Mini?

Grok-3 Mini significantly outperforms across most benchmarks. Gemini 2.0 Flash Thinking is made by Google and Grok-3 Mini is made by xAI. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.

How does Gemini 2.0 Flash Thinking compare to Grok-3 Mini in benchmarks?

Gemini 2.0 Flash Thinking scores MMMU: 75.4%, GPQA: 74.2%, AIME 2024: 73.3%. Grok-3 Mini scores AIME 2024: 95.8%, AIME 2025: 90.8%, GPQA: 84.0%, LiveCodeBench: 80.4%.
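The head-to-head gap is easiest to see on the two benchmarks both models report. A minimal sketch, with the scores hardcoded from this page (the benchmark lists otherwise don't overlap):

```python
# Score deltas on benchmarks both models report (values from this page).
gemini = {"MMMU": 75.4, "GPQA": 74.2, "AIME 2024": 73.3}
grok_mini = {"AIME 2024": 95.8, "AIME 2025": 90.8, "GPQA": 84.0, "LiveCodeBench": 80.4}

# Intersect the benchmark names, then compute Grok-3 Mini's lead on each.
shared = sorted(gemini.keys() & grok_mini.keys())
deltas = {b: round(grok_mini[b] - gemini[b], 1) for b in shared}
for bench, delta in deltas.items():
    print(f"{bench}: Grok-3 Mini leads by {delta:+.1f} points")
```

This yields leads of +22.5 points on AIME 2024 and +9.8 on GPQA, matching the Key Takeaways above.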

What are the context window sizes for Gemini 2.0 Flash Thinking and Grok-3 Mini?

Gemini 2.0 Flash Thinking's context window is not specified in our data, while Grok-3 Mini supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
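To gauge whether a document fits Grok-3 Mini's 128K-token window, a rough sketch: the 4-characters-per-token ratio is a coarse heuristic for English prose (an assumption, not a real tokenizer, which is model-specific), and the 8,000-token output reservation matches the output limit listed above.

```python
# Rough check of whether a document fits a 128K-token context window.
# CHARS_PER_TOKEN = 4 is a coarse heuristic for English text; a real,
# model-specific tokenizer can differ substantially.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 128_000  # Grok-3 Mini input limit per this page

def fits_in_context(text: str, reserved_output_tokens: int = 8_000) -> bool:
    """Estimate whether `text` plus a reserved output budget fits the window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("word " * 10_000))  # ~50K chars -> ~12.5K tokens -> True
```

In practice you would count tokens with the provider's own tokenizer before sending a request; this only flags obvious overflows.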

Who makes Gemini 2.0 Flash Thinking and Grok-3 Mini?

Gemini 2.0 Flash Thinking is developed by Google and Grok-3 Mini is developed by xAI.