Ministral 8B Instruct
Overview
Ministral-8B-Instruct-2410 is an instruct fine-tuned model built for local intelligence, on-device computing, and at-the-edge use cases; it significantly outperforms existing models of similar size.
Ministral 8B Instruct was released on October 16, 2024. API access is available through Mistral AI.
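As a minimal sketch of what a request looks like, the snippet below calls Mistral AI's chat completions endpoint with Python's `requests`. The model identifier `ministral-8b-latest` and the exact response shape are assumptions here; consult Mistral AI's API documentation for the current values.

```python
# Minimal sketch of querying Ministral 8B Instruct through Mistral AI's
# chat completions endpoint. The model id "ministral-8b-latest" is an
# assumption; check the provider's docs for the current identifier.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumes the key is set in the environment

payload = {
    "model": "ministral-8b-latest",
    "messages": [
        {"role": "user", "content": "Summarize the benefits of on-device inference."}
    ],
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```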
Performance
Compare Ministral 8B Instruct to other models by quality (GPQA score) vs cost. Higher scores and lower costs represent better value.
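As an illustration of how such a value comparison can be ranked, the sketch below scores hypothetical models by GPQA points per dollar of blended token price. All names, scores, and prices are placeholders, not data from the chart.

```python
# Illustrative quality-vs-cost ranking: GPQA score divided by blended
# price per million tokens. Values are placeholders, not measured data.
models = [
    # (name, gpqa_score_pct, blended_price_usd_per_million_tokens)
    ("model-a", 30.0, 0.10),
    ("model-b", 35.0, 0.30),
    ("model-c", 28.0, 0.05),
]

for name, score, price in sorted(models, key=lambda m: m[1] / m[2], reverse=True):
    print(f"{name}: {score:.1f} GPQA @ ${price:.2f}/M tokens "
          f"-> {score / price:.1f} points per dollar")
```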
Benchmarks
Ministral 8B Instruct Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Ministral 8B Instruct across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input Tokens | Max Output Tokens | Latency (s) | Throughput (tok/s) | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Mistral AI | $0.10 | $0.10 | 128.0K | 128.0K | 0.18 | 0.1 | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
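To make the per-million-token prices concrete, the sketch below estimates the cost of a single request at the Mistral AI rates listed above; the token counts are hypothetical.

```python
# Turning the table's per-million-token prices into a per-request estimate.
# Prices mirror the Mistral AI row above; token counts are hypothetical.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.10  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000250
```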
Example Outputs
Recent Posts
Recent Reviews
API Access
API Access Coming Soon
API access for Ministral 8B Instruct will be available soon through our gateway.
FAQ
Common questions about Ministral 8B Instruct
