Ministral 3 (14B Instruct 2512)
Overview
Ministral 3 14B is a balanced model in the Ministral 3 family: a powerful yet efficient small language model with vision capabilities. This is the instruction post-trained version, released in FP8, making it well suited to chat and instruction-following use cases. The Ministral 3 family is designed for edge deployment and runs on a wide range of hardware. Ministral 3 14B can even be deployed locally, fitting in 24 GB of VRAM in BF16 and under 12 GB of RAM/VRAM when quantized.
Ministral 3 (14B Instruct 2512) was released on December 4, 2025.
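For local experimentation, a text-only chat call with Hugging Face transformers might look like the sketch below. The repository id is an assumption (this page does not list one), and vision inputs would require the model's multimodal processor rather than a plain tokenizer.

```python
# Minimal local-inference sketch. The repo id below is a placeholder assumption;
# check the official model card for the actual identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Ministral-3-14B-Instruct-2512"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # BF16 fits in roughly 24 GB of VRAM per the overview
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Ministral 3 family in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For the sub-12 GB footprint mentioned above, the same pattern applies with a quantized checkpoint or a quantization config in place of the BF16 load.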
Performance
Compare Ministral 3 (14B Instruct 2512) to other models by quality (GPQA score) vs cost. Higher scores and lower costs represent better value.
Benchmarks
Ministral 3 (14B Instruct 2512) Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Ministral 3 (14B Instruct 2512) across different providers:
Example Outputs
API Access
API access for Ministral 3 (14B Instruct 2512) will be available soon through our gateway.
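Once gateway access is live, a call would typically look like the sketch below, assuming an OpenAI-compatible chat completions endpoint. The base URL and model identifier here are placeholders, not values published on this page.

```python
# Hypothetical API-call sketch for when gateway access becomes available.
# The base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # placeholder gateway URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="ministral-3-14b-instruct-2512",  # placeholder model identifier
    messages=[{"role": "user", "content": "Hello, Ministral!"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```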
FAQ
Common questions about Ministral 3 (14B Instruct 2512)
