Gemini 2.0 Flash
Overview
Gemini 2.0 Flash is a next-generation model featuring improved speed, native tool use, multimodal generation, and a 1M-token context window. It accepts audio, image, video, and text input, and supports structured outputs, function calling, code execution, and search.
Gemini 2.0 Flash was released on December 1, 2024. API access is available through Google.
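As a rough illustration of how a request to this model might look, the sketch below uses the google-genai Python SDK. The model identifier, environment variable, and exact client surface are assumptions and may differ from the provider's current API.

```python
# Minimal sketch, assuming the google-genai Python SDK (pip install google-genai)
# and an API key exported as GEMINI_API_KEY. Model name and client surface may differ.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Single text-in, text-out request to the model.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the benefits of a 1M-token context window in two sentences.",
)
print(response.text)
```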
Performance
Timeline
Released
December 1, 2024
Knowledge Cutoff
August 2024
Specifications
Parameters
Unknown
License
Proprietary
Training Data
Unknown
Benchmarks
Gemini 2.0 Flash Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Gemini 2.0 Flash across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Google | $0.10 | $0.40 | 1.0M | 8.2K | 0.4 | 183.0 c/s | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
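For a rough sense of what the listed Google rates imply, the helper below (hypothetical, not part of any SDK) converts token counts into dollars using the $0.10 / $0.40 per-million-token prices from the table above.

```python
# Hypothetical helper: estimates request cost from the per-million-token
# rates listed in the pricing table above ($0.10 input, $0.40 output).
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.10, output_rate: float = 0.40) -> float:
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Example: a 12,000-token prompt with a 1,500-token completion.
print(f"${estimate_cost_usd(12_000, 1_500):.4f}")  # $0.0018
```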
API Access
API Access Coming Soon
API access for Gemini 2.0 Flash will be available soon through our gateway.
FAQ
Common questions about Gemini 2.0 Flash
When was Gemini 2.0 Flash released?
Gemini 2.0 Flash was released on December 1, 2024 by Google.
Who created Gemini 2.0 Flash?
Gemini 2.0 Flash was created by Google.
What license does Gemini 2.0 Flash use?
Gemini 2.0 Flash is a proprietary model.
What is the knowledge cutoff of Gemini 2.0 Flash?
Gemini 2.0 Flash has a knowledge cutoff of August 2024. This means the model was trained on data up to this date and may not have information about events after this time.
Is Gemini 2.0 Flash multimodal?
Yes, Gemini 2.0 Flash is a multimodal model that can process text, images, audio, and video as input.
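To illustrate the multimodal input described above, here is a sketch of sending an image alongside a text prompt, again assuming the google-genai Python SDK; the file name, Part helper usage, and model name are assumptions.

```python
# Sketch of a multimodal (image + text) request, assuming the google-genai SDK.
import os
from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

with open("chart.png", "rb") as f:  # any local image file (hypothetical path)
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Describe what this chart shows in one paragraph.",
    ],
)
print(response.text)
```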