Mistral NeMo Instruct
Overview
A state-of-the-art 12B-parameter multilingual model with a 128k-token context window, designed for global applications with strong performance across many languages.
Mistral NeMo Instruct was released on July 18, 2024. API access is available through Google and Mistral AI.
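For reference, a minimal request to Mistral AI's chat completions endpoint might look like the sketch below. The model identifier `open-mistral-nemo`, the endpoint URL, and the `MISTRAL_API_KEY` environment variable are assumptions based on Mistral's public platform, not details taken from this page.

```python
# Minimal sketch: querying Mistral NeMo Instruct through Mistral AI's
# chat completions API. The model ID "open-mistral-nemo" and the endpoint
# URL are assumptions based on Mistral's public documentation.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumes an API key is set in the environment

payload = {
    "model": "open-mistral-nemo",
    "messages": [
        {"role": "user", "content": "Summarize the key features of Mistral NeMo."}
    ],
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```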
Performance
Compare Mistral NeMo Instruct to other models by quality (GPQA score) vs cost. Higher scores and lower costs represent better value.
Benchmarks
Mistral NeMo Instruct Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Mistral NeMo Instruct across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input (tokens) | Max Output (tokens) | Latency (s) | Throughput (tok/s) | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Google | $0.15 | $0.15 | 128.0K | 128.0K | 0.4 | 42.0 | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
| Mistral AI | $0.15 | $0.15 | 128.0K | 128.0K | 0.5 | 0.1 | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
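To make the listed rates concrete, the sketch below estimates the cost of a single request at $0.15 per million input tokens and $0.15 per million output tokens; the token counts in the example are hypothetical.

```python
# Rough cost estimate at the listed rates of $0.15 per 1M input tokens
# and $0.15 per 1M output tokens. The token counts below are hypothetical.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.15  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a 4,000-token prompt with a 1,000-token completion
# costs 4000/1e6 * 0.15 + 1000/1e6 * 0.15 = $0.00075.
print(f"${request_cost(4_000, 1_000):.5f}")
```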
Provider comparison charts for Mistral NeMo Instruct:
- Price comparison: price per 1M input tokens (USD), lower is better
- Throughput comparison: tokens per second, higher is better
- Latency comparison: time to first token (s), lower is better
- API providers: price vs. throughput
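Latency and throughput figures like the ones above can be approximated by timing a streaming request yourself. The sketch below measures time to first token and a rough output rate against Mistral AI's streaming chat completions endpoint; the model identifier, the server-sent-event framing, and the use of chunk count as a stand-in for token count are all assumptions.

```python
# Rough measurement of time-to-first-token and output rate via a streaming
# request. The "data: {...}" SSE framing and the "[DONE]" sentinel are
# assumptions about Mistral AI's streaming format; each content chunk is
# counted as roughly one token, which is only an approximation.
import json
import os
import time
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumed to be set

payload = {
    "model": "open-mistral-nemo",  # assumed public model identifier
    "messages": [{"role": "user", "content": "Write a short paragraph about GPUs."}],
    "stream": True,
}

start = time.perf_counter()
first_token_at = None
chunks = 0

with requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        delta = json.loads(data)["choices"][0]["delta"].get("content")
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunks += 1

if first_token_at is not None:
    generation_time = time.perf_counter() - first_token_at
    print(f"time to first token: {first_token_at - start:.2f}s")
    print(f"approx. throughput: {chunks / max(generation_time, 1e-9):.1f} chunks/s")
```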
API Access
API access for Mistral NeMo Instruct will be available soon through our gateway.
FAQ
Common questions about Mistral NeMo Instruct
