Model Comparison

GPT-3.5 Turbo vs Ministral 8B Instruct

GPT-3.5 Turbo shows notably better performance in the majority of benchmarks, while Ministral 8B Instruct is 7.5x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

GPT-3.5 Turbo outperforms in 2 benchmarks (HumanEval, MMLU), while Ministral 8B Instruct leads in 1 benchmark (MATH).


Mon Apr 13 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Ministral 8B Instruct costs less

For input processing, GPT-3.5 Turbo ($0.50/1M tokens) is 5.0x more expensive than Ministral 8B Instruct ($0.10/1M tokens).

For output processing, GPT-3.5 Turbo ($1.50/1M tokens) is 15.0x more expensive than Ministral 8B Instruct ($0.10/1M tokens).

Overall, GPT-3.5 Turbo is 7.5x more expensive than Ministral 8B Instruct.*

* Using a 3:1 ratio of input to output tokens
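The blended-price calculation behind the 7.5x figure can be sketched in a few lines of Python. The function name and structure are illustrative, not part of any provider API; the prices and the 3:1 input:output ratio come from the sections above.

```python
def blended_price(input_per_m, output_per_m, input_ratio=3, output_ratio=1):
    """Weighted average price per 1M tokens, assuming a fixed
    input:output token ratio (3:1 by default, as used above)."""
    total = input_ratio + output_ratio
    return (input_ratio * input_per_m + output_ratio * output_per_m) / total

gpt35 = blended_price(0.50, 1.50)      # $0.75 per 1M blended tokens
ministral = blended_price(0.10, 0.10)  # $0.10 per 1M blended tokens
print(round(gpt35 / ministral, 2))     # 7.5
```

Note that although GPT-3.5 Turbo's output tokens are 15x more expensive, the blended ratio is only 7.5x because input tokens dominate under the 3:1 assumption.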

Lowest available price from all providers
OpenAI
GPT-3.5 Turbo
Input tokens: $0.50
Output tokens: $1.50
Best provider: Azure

Mistral AI
Ministral 8B Instruct
Input tokens: $0.10
Output tokens: $0.10
Best provider: Mistral

Context Window

Maximum input and output token capacity

Ministral 8B Instruct accepts 128,000 input tokens compared to GPT-3.5 Turbo's 16,385 tokens. Ministral 8B Instruct can generate longer responses up to 128,000 tokens, while GPT-3.5 Turbo is limited to 4,096 tokens.
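In practice the limits above determine whether a prompt plus its requested completion fits in a single request. A minimal sketch of that check, using the limits listed here; the model keys and dict layout are illustrative, and it assumes (as is typical for GPT-3.5 Turbo) that output tokens count against the shared context window:

```python
# Context/output limits as listed in this section.
LIMITS = {
    "gpt-3.5-turbo":         {"context": 16_385, "max_output": 4_096},
    "ministral-8b-instruct": {"context": 128_000, "max_output": 128_000},
}

def fits(model, prompt_tokens, completion_tokens):
    """True if the request fits the model's output cap and total window.
    Token counts are assumed to come from the model's own tokenizer."""
    lim = LIMITS[model]
    return (completion_tokens <= lim["max_output"]
            and prompt_tokens + completion_tokens <= lim["context"])

print(fits("gpt-3.5-turbo", 15_000, 4_000))          # False: 19,000 > 16,385
print(fits("ministral-8b-instruct", 15_000, 4_000))  # True
```

The same 15,000-token prompt that overflows GPT-3.5 Turbo's window leaves Ministral 8B Instruct with over 100,000 tokens of headroom.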

OpenAI
GPT-3.5 Turbo
Input: 16,385 tokens
Output: 4,096 tokens

Mistral AI
Ministral 8B Instruct
Input: 128,000 tokens
Output: 128,000 tokens

License

Usage and distribution terms

GPT-3.5 Turbo is licensed under a proprietary license, while Ministral 8B Instruct uses the Mistral Research License.

License differences may affect how you can use these models in commercial or open-source projects.

GPT-3.5 Turbo

Proprietary

Closed source

Ministral 8B Instruct

Mistral Research License

Open weights

Release Timeline

When each model was launched

GPT-3.5 Turbo was released on 2023-03-21, while Ministral 8B Instruct was released on 2024-10-16.

Ministral 8B Instruct is 19 months newer than GPT-3.5 Turbo.

GPT-3.5 Turbo

Mar 21, 2023

3.1 years ago

Ministral 8B Instruct

Oct 16, 2024

1.5 years ago

1.6yr newer

Knowledge Cutoff

When training data ends

GPT-3.5 Turbo has a documented knowledge cutoff of 2021-09-30, while Ministral 8B Instruct's cutoff date is not specified.

We can confirm GPT-3.5 Turbo's training data extends to 2021-09-30, but cannot make a direct comparison without Ministral 8B Instruct's cutoff date.

GPT-3.5 Turbo

Sep 2021

Ministral 8B Instruct

Not specified

Provider Availability

GPT-3.5 Turbo is available from Azure and OpenAI. Ministral 8B Instruct is available from Mistral AI.

GPT-3.5 Turbo

Azure
Input: $0.50/1M • Output: $1.50/1M

OpenAI
Input: $0.50/1M • Output: $1.50/1M

Ministral 8B Instruct

Mistral
Input: $0.10/1M • Output: $0.10/1M
* Prices shown are per million tokens


Key Takeaways

GPT-3.5 Turbo:
Higher HumanEval score (68.0% vs 34.8%)
Higher MMLU score (69.8% vs 65.0%)

Ministral 8B Instruct:
Higher MATH score (54.5% vs 43.1%)
Larger context window (128,000 vs 16,385 tokens)
Less expensive input tokens
Less expensive output tokens
Has open weights

Detailed Comparison

AI Model Comparison Table: OpenAI's GPT-3.5 Turbo vs Mistral AI's Ministral 8B Instruct, feature by feature.

FAQ

Common questions about GPT-3.5 Turbo vs Ministral 8B Instruct

GPT-3.5 Turbo shows notably better performance in the majority of benchmarks. GPT-3.5 Turbo is made by OpenAI and Ministral 8B Instruct is made by Mistral AI. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.
GPT-3.5 Turbo scores DROP: 70.2%, MMLU: 69.8%, HumanEval: 68.0%, MGSM: 56.3%, MATH: 43.1%. Ministral 8B Instruct scores MT-Bench: 83.0%, Winogrande: 75.3%, ARC-C: 71.9%, Arena Hard: 70.9%, MBPP pass@1: 70.0%.
Ministral 8B Instruct is 5.0x cheaper for input tokens. GPT-3.5 Turbo costs $0.50/M input and $1.50/M output via Azure. Ministral 8B Instruct costs $0.10/M input and $0.10/M output via Mistral.
GPT-3.5 Turbo supports 16K tokens and Ministral 8B Instruct supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include context window (16K vs 128K), input pricing ($0.50 vs $0.10/M), licensing (Proprietary vs Mistral Research License). See the full comparison above for benchmark-by-benchmark results.
GPT-3.5 Turbo is developed by OpenAI and Ministral 8B Instruct is developed by Mistral AI.