Gemini 1.5 Pro
Overview
Gemini 1.5 Pro is a mid-size multimodal model optimized for a wide range of reasoning tasks. It can process large amounts of data at once, including 2 hours of video, 19 hours of audio, codebases with 60,000 lines of code, or 2,000 pages of text.
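The capacities above can be sanity-checked with a back-of-envelope token estimate. The per-media rates below are assumptions chosen to be roughly consistent with the figures listed (a ~2M-token window, ~4 characters per text token, and assumed per-second audio and per-frame video rates), not official numbers:

```python
# Back-of-envelope check of whether an input fits in Gemini 1.5 Pro's
# ~2.1M-token context window. All per-media token rates are assumptions
# for illustration, not official figures.

CONTEXT_WINDOW = 2_097_152  # ~2.1M tokens, per the provider table below

TOKENS_PER_TEXT_CHAR = 0.25  # assumed: ~4 characters per text token
TOKENS_PER_AUDIO_SEC = 32    # assumed audio tokenization rate
TOKENS_PER_VIDEO_SEC = 258   # assumed: ~258 tokens per frame at 1 fps

def estimated_tokens(text_chars=0, audio_seconds=0, video_seconds=0):
    """Rough token estimate for a mixed-media prompt."""
    return int(text_chars * TOKENS_PER_TEXT_CHAR
               + audio_seconds * TOKENS_PER_AUDIO_SEC
               + video_seconds * TOKENS_PER_VIDEO_SEC)

def fits_in_context(**kwargs):
    """True if the estimated prompt size fits in the context window."""
    return estimated_tokens(**kwargs) <= CONTEXT_WINDOW
```

Under these assumed rates, roughly 2,000 pages of text (~8M characters) or 2 hours of video land just under the window, matching the capacities quoted above.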
Gemini 1.5 Pro was released on May 1, 2024. API access is available through Google.
Performance
Timeline
Released: May 1, 2024
Knowledge Cutoff: November 2023
Specifications
Parameters: Unknown
License: Proprietary
Training Data: Unknown
Benchmarks
Gemini 1.5 Pro Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Gemini 1.5 Pro across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Google | $2.50 | $10.00 | 2.1M | 8.2K | 0.7 | 85.0 c/s | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
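The per-token rates above translate directly into a per-request cost. A minimal sketch, using the Google rates listed ($2.50 per million input tokens, $10.00 per million output tokens):

```python
# Estimate the USD cost of a single Gemini 1.5 Pro request at the
# Google rates from the pricing table above.

INPUT_PRICE_PER_M = 2.50    # $ per million input tokens
OUTPUT_PRICE_PER_M = 10.00  # $ per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a full 1M-token prompt with an 8K-token response.
print(f"${request_cost(1_000_000, 8_000):.2f}")  # → $2.58
```

Note that output tokens cost 4x more than input tokens, so long responses dominate the bill even though prompts are usually much larger.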
API Access
API Access Coming Soon
API access for Gemini 1.5 Pro will be available soon through our gateway.
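Until the gateway is available, the model can be reached through Google's public Generative Language API. A minimal sketch of building such a request, assuming a `GOOGLE_API_KEY` environment variable holds your key:

```python
# Sketch of a request to Google's public Gemini API REST endpoint
# (not this site's gateway, which is not yet available). Assumes an
# API key in the GOOGLE_API_KEY environment variable.
import json
import os
import urllib.request

MODEL = "gemini-1.5-pro"
URL = (f"https://generativelanguage.googleapis.com/v1beta/"
       f"models/{MODEL}:generateContent")

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a generateContent request for `prompt`."""
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": os.environ.get("GOOGLE_API_KEY", ""),
        },
    )

# To actually send it:
# with urllib.request.urlopen(build_request("Summarize this file.")) as resp:
#     body = json.load(resp)
#     print(body["candidates"][0]["content"]["parts"][0]["text"])
```

The request is built separately from being sent, so the payload shape can be inspected or tested without network access.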
FAQ
Common questions about Gemini 1.5 Pro
**When was Gemini 1.5 Pro released?**
Gemini 1.5 Pro was released on May 1, 2024 by Google.

**Who created Gemini 1.5 Pro?**
Gemini 1.5 Pro was created by Google.

**What license does Gemini 1.5 Pro use?**
Gemini 1.5 Pro is a proprietary model.

**What is the knowledge cutoff for Gemini 1.5 Pro?**
Gemini 1.5 Pro has a knowledge cutoff of November 2023. The model was trained on data up to this date and may not have information about events after this time.

**Is Gemini 1.5 Pro multimodal?**
Yes, Gemini 1.5 Pro is a multimodal model that can process text, images, audio, and video as input.