Ministral 8B Instruct
Overview
Ministral-8B-Instruct-2410 is an instruct fine-tuned model for local intelligence, on-device computing, and at-the-edge use cases that significantly outperforms existing models of similar size.
Ministral 8B Instruct was released on October 16, 2024. API access is available through Mistral AI.
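As a minimal sketch of that API access, the snippet below calls the model through Mistral AI's chat completions endpoint using the official mistralai Python SDK. The model identifier ministral-8b-latest and the example prompt are assumptions for illustration; confirm the exact model name against Mistral AI's current model list.

```python
# pip install mistralai
import os
from mistralai import Mistral

# Reads the API key from the MISTRAL_API_KEY environment variable.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="ministral-8b-latest",  # assumed identifier; verify against Mistral AI's model list
    messages=[{"role": "user", "content": "Summarize the benefits of on-device inference."}],
)

print(response.choices[0].message.content)
```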
Performance
Timeline
Released: October 16, 2024
Knowledge Cutoff: Unknown
Specifications
Parameters
8.0B
License
Mistral Research License
Training Data
Unknown
Tags
tuning:instruct
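For the on-device use case the specifications describe, the following is a hedged sketch of loading the 8.0B-parameter weights locally with Hugging Face transformers. The repository id mistralai/Ministral-8B-Instruct-2410, dtype, and generation settings are assumptions; the weights are gated under the Mistral Research License, so access must be granted on Hugging Face first, and a recent transformers release is required.

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Ministral-8B-Instruct-2410"  # assumed Hugging Face repo id (gated)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within a single large GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Give three edge-computing use cases for a small LLM."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```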
Benchmarks
Benchmark chart: Ministral 8B Instruct performance across datasets. Scores are sourced from the model's scorecard, paper, or official blog posts.
Pricing
Pricing, performance, and capabilities for Ministral 8B Instruct across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| Mistral AI | $0.10 | $0.10 | 128.0K | 128.0K | 0.18 | 0.1 c/s | — | Text | Text |
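As a rough illustration of the listed Mistral AI rates ($0.10 per million tokens for both input and output), the helper below estimates the cost of a single request from token counts. The token counts in the usage example are made up.

```python
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens (listed Mistral AI rate)
OUTPUT_PRICE_PER_M = 0.10  # USD per 1M output tokens (listed Mistral AI rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion costs $0.000250.
print(f"${request_cost(2_000, 500):.6f}")
```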
API Access
API Access Coming Soon
API access for Ministral 8B Instruct will be available soon through our gateway.
FAQ
Common questions about Ministral 8B Instruct
When was Ministral 8B Instruct released? Ministral 8B Instruct was released on October 16, 2024.
How many parameters does Ministral 8B Instruct have? Ministral 8B Instruct has 8.0 billion parameters.