GPT-4.1 mini
Overview
GPT-4.1 mini balances intelligence, speed, and cost. It marks a significant leap in small-model performance, matching or beating GPT-4o on many benchmarks while reducing latency and cost.
GPT-4.1 mini was released on April 14, 2025. API access is available through OpenAI.
Performance
Timeline
Released: April 14, 2025
Knowledge Cutoff: May 2024
Specifications
Parameters: Unknown
License: Proprietary
Training Data: Unknown
Benchmarks
GPT-4.1 mini Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for GPT-4.1 mini across different providers:
| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|---|---|---|---|---|---|---|---|---|---|
| OpenAI | $0.40 | $1.60 | 1.0M | 32.8K | 5.0 | 150.0 c/s | — | Text, Image, Audio, Video | Text, Image, Audio, Video |
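The per-million-token rates above make cost estimation a simple calculation. A minimal sketch, using the OpenAI rates from the table ($0.40/M input, $1.60/M output); the function name and example token counts are illustrative, not from any official SDK:

```python
# Illustrative cost estimator based on the table's OpenAI rates.
INPUT_RATE_PER_M = 0.40   # USD per million input tokens
OUTPUT_RATE_PER_M = 1.60  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0072
```

Note that output tokens cost 4x input tokens, so long completions dominate the bill even for large prompts.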
API Access
API Access Coming Soon
API access for GPT-4.1 mini will be available soon through our gateway.
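In the meantime, the model is reachable through OpenAI's standard Chat Completions API. A minimal sketch of a request body, assuming OpenAI's published model ID `gpt-4.1-mini`; the gateway mentioned above may use a different endpoint once it launches:

```python
import json

def build_request(prompt: str) -> dict:
    """Build a minimal Chat Completions payload for GPT-4.1 mini."""
    return {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize the GPT-4.1 mini model card.")
print(json.dumps(payload, indent=2))
# To send it: POST to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <OPENAI_API_KEY>" header.
```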
FAQ
Common questions about GPT-4.1 mini
When was GPT-4.1 mini released?
GPT-4.1 mini was released on April 14, 2025, by OpenAI.
Who created GPT-4.1 mini?
GPT-4.1 mini was created by OpenAI.
What license is GPT-4.1 mini released under?
GPT-4.1 mini is released under a proprietary license.
What is GPT-4.1 mini's knowledge cutoff?
GPT-4.1 mini has a knowledge cutoff of May 2024. The model was trained on data up to that date and may not have information about later events.
Is GPT-4.1 mini multimodal?
Yes, GPT-4.1 mini is a multimodal model that can process both text and images as input.