Mistral NeMo Instruct
Overview
A state-of-the-art 12B model with a 128k-token context window and strong multilingual performance, designed for global applications.
Mistral NeMo Instruct was released on July 18, 2024. API access is available through Google and Mistral AI.
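As a rough illustration of the Mistral AI route, the sketch below sends a single chat-completion request over HTTP. The endpoint URL and the model identifier `open-mistral-nemo` are assumptions based on Mistral AI's public API conventions rather than anything stated on this page; substitute whatever names your provider documents.

```python
# Minimal sketch: query Mistral NeMo Instruct via Mistral AI's chat-completions API.
# Endpoint URL and model id are assumptions; check the provider's API docs.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["MISTRAL_API_KEY"]                  # your own API key

payload = {
    "model": "open-mistral-nemo",  # assumed model identifier for Mistral NeMo Instruct
    "messages": [
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```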
Performance
Timeline
Released: July 18, 2024
Knowledge Cutoff: Unknown
Specifications
Parameters: 12.0B
License: Apache 2.0
Training Data: Unknown
Tags: tuning:instruct
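Because the weights ship under Apache 2.0, the model can also be run locally. The sketch below assumes the Hugging Face repository id `mistralai/Mistral-Nemo-Instruct-2407` and the standard transformers text-generation flow; treat the repo id and dtype choice as assumptions and adjust to the checkpoint you actually pull.

```python
# Minimal sketch: run the open Apache 2.0 weights locally with Hugging Face transformers.
# The repository id below is an assumption; a 12B model in bf16 needs roughly 24 GB of GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the instruct chat template, then generate and decode only the new tokens.
messages = [{"role": "user", "content": "Give me three uses for a 128k context window."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```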
Benchmarks
Mistral NeMo Instruct Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Mistral NeMo Instruct across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Google | $0.15 | $0.15 | 128.0K | 128.0K | 0.4 | 42.0 c/s | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
| Mistral AI | $0.15 | $0.15 | 128.0K | 128.0K | 0.5 | 0.1 c/s | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
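To put the per-token rates in concrete terms, the helper below converts the table's $/M prices into a per-request cost estimate; the token counts in the example are hypothetical.

```python
# Estimate request cost from the per-million-token rates in the table above
# ($0.15/M input and $0.15/M output on both providers).
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.15, output_rate: float = 0.15) -> float:
    """Return the USD cost of one request given token counts and $/M rates."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Example: a 100k-token prompt (most of the 128k window) with a 2k-token reply
# costs about $0.0153 at either provider's listed rates.
print(f"${estimate_cost(100_000, 2_000):.4f}")
```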
Price Comparison for Mistral NeMo Instruct
Price per 1M input tokens (USD), lower is better
Throughput Comparison for Mistral NeMo Instruct
Tokens per second, higher is better
Latency Comparison for Mistral NeMo Instruct
Time to first token (s), lower is better
API Access
API Access Coming Soon
API access for Mistral NeMo Instruct will be available soon through our gateway.
FAQ
Common questions about Mistral NeMo Instruct
When was Mistral NeMo Instruct released? Mistral NeMo Instruct was released on July 18, 2024.
How many parameters does Mistral NeMo Instruct have? Mistral NeMo Instruct has 12.0 billion parameters.