Model Comparison

GPT-3.5 Turbo vs Phi-3.5-mini-instruct

GPT-3.5 Turbo significantly outperforms Phi-3.5-mini-instruct across most benchmarks, while Phi-3.5-mini-instruct is roughly 7.5x cheaper per blended token (assuming a 3:1 input-to-output ratio).

Performance Benchmarks

Comparative analysis across standard metrics

5 benchmarks

GPT-3.5 Turbo leads on 4 of the 5 benchmarks (GPQA, HumanEval, MGSM, MMLU), while Phi-3.5-mini-instruct leads on 1 (MATH).

Sat Apr 11 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Phi-3.5-mini-instruct costs less

For input processing, GPT-3.5 Turbo ($0.50/1M tokens) is 5.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

For output processing, GPT-3.5 Turbo ($1.50/1M tokens) is 15.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

Overall, GPT-3.5 Turbo is roughly 7.5x more expensive than Phi-3.5-mini-instruct on a blended basis.*

* Using a 3:1 ratio of input to output tokens
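The blended figure can be reproduced with a few lines of arithmetic. This is a hypothetical helper (the function name is illustrative, not from any API) that applies the 3:1 input-to-output weighting from the footnote above:

```python
# Hypothetical helper: blended per-1M-token price assuming a fixed
# input-to-output token ratio (3:1 by default, per the footnote above).
def blended_price(input_per_m, output_per_m, ratio=3):
    # "ratio" parts of input for every 1 part of output
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

gpt35 = blended_price(0.50, 1.50)  # $0.75 per 1M blended tokens
phi35 = blended_price(0.10, 0.10)  # $0.10 per 1M blended tokens
print(f"{gpt35 / phi35:.1f}x")     # prints 7.5x
```

At a 3:1 ratio the output premium is diluted, which is why the blended gap (7.5x) sits between the input gap (5.0x) and the output gap (15.0x).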

Lowest available price from all providers:

OpenAI GPT-3.5 Turbo
Input tokens: $0.50
Output tokens: $1.50
Best provider: Azure

Microsoft Phi-3.5-mini-instruct
Input tokens: $0.10
Output tokens: $0.10
Best provider: Azure

Context Window

Maximum input and output token capacity

Phi-3.5-mini-instruct accepts up to 128,000 input tokens, compared to GPT-3.5 Turbo's 16,385. Phi-3.5-mini-instruct can also generate responses of up to 128,000 tokens, while GPT-3.5 Turbo is limited to 4,096 output tokens.

OpenAI GPT-3.5 Turbo
Input: 16,385 tokens
Output: 4,096 tokens

Microsoft Phi-3.5-mini-instruct
Input: 128,000 tokens
Output: 128,000 tokens
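In practice these limits determine whether a request can be served at all. The sketch below (limits copied from the figures above; the function and dictionary names are made up for illustration, and real token counts would come from each model's tokenizer) checks whether a request fits a model's window:

```python
# Context limits as quoted above (input and output, in tokens).
LIMITS = {
    "gpt-3.5-turbo": {"input": 16_385, "output": 4_096},
    "phi-3.5-mini-instruct": {"input": 128_000, "output": 128_000},
}

def fits(model, prompt_tokens, max_output_tokens):
    """Return True if the request stays within the model's limits."""
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_output_tokens <= lim["output"]

print(fits("gpt-3.5-turbo", 20_000, 1_000))          # False: prompt too long
print(fits("phi-3.5-mini-instruct", 20_000, 1_000))  # True
```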

License

Usage and distribution terms

GPT-3.5 Turbo is licensed under a proprietary license, while Phi-3.5-mini-instruct is released under the MIT license.

License differences may affect how you can use these models in commercial or open-source projects.

GPT-3.5 Turbo: Proprietary (closed source)

Phi-3.5-mini-instruct: MIT (open weights)

Release Timeline

When each model was launched

GPT-3.5 Turbo was released on 2023-03-21, while Phi-3.5-mini-instruct was released on 2024-08-23.

Phi-3.5-mini-instruct is 17 months newer than GPT-3.5 Turbo.

GPT-3.5 Turbo: Mar 21, 2023 (3.1 years ago)

Phi-3.5-mini-instruct: Aug 23, 2024 (1.6 years ago), 1.4 years newer
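The 17-month figure follows directly from the two release dates quoted above; a minimal check with Python's standard datetime module:

```python
from datetime import date

# Release dates as quoted above.
gpt35 = date(2023, 3, 21)
phi35 = date(2024, 8, 23)

# Whole-month difference between the two releases.
months = (phi35.year - gpt35.year) * 12 + (phi35.month - gpt35.month)
print(months)  # prints 17 (about 1.4 years)
```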

Knowledge Cutoff

When training data ends

GPT-3.5 Turbo has a documented knowledge cutoff of 2021-09-30, while Phi-3.5-mini-instruct's cutoff date is not specified.

We can confirm GPT-3.5 Turbo's training data extends to 2021-09-30, but cannot make a direct comparison without Phi-3.5-mini-instruct's cutoff date.

GPT-3.5 Turbo: Sep 2021

Phi-3.5-mini-instruct: not specified

Provider Availability

GPT-3.5 Turbo is available from Azure and OpenAI. Phi-3.5-mini-instruct is available from Azure.

GPT-3.5 Turbo
Azure: $0.50/1M input, $1.50/1M output
OpenAI: $0.50/1M input, $1.50/1M output

Phi-3.5-mini-instruct
Azure: $0.10/1M input, $0.10/1M output

* Prices shown are per million tokens


Key Takeaways

GPT-3.5 Turbo advantages:
Higher GPQA score (30.8% vs 30.4%)
Higher HumanEval score (68.0% vs 62.8%)
Higher MGSM score (56.3% vs 47.9%)
Higher MMLU score (69.8% vs 69.0%)

Phi-3.5-mini-instruct advantages:
Higher MATH score (48.5% vs 43.1%)
Larger context window (128,000 tokens vs 16,385)
Less expensive input tokens
Less expensive output tokens
Has open weights

Detailed Comparison

Feature-by-feature comparison table: OpenAI GPT-3.5 Turbo vs Microsoft Phi-3.5-mini-instruct.

FAQ

Common questions about GPT-3.5 Turbo vs Phi-3.5-mini-instruct

Which model is better, GPT-3.5 Turbo or Phi-3.5-mini-instruct?
GPT-3.5 Turbo significantly outperforms Phi-3.5-mini-instruct across most benchmarks. GPT-3.5 Turbo is made by OpenAI and Phi-3.5-mini-instruct is made by Microsoft. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
GPT-3.5 Turbo scores DROP: 70.2%, MMLU: 69.8%, HumanEval: 68.0%, MGSM: 56.3%, MATH: 43.1%. Phi-3.5-mini-instruct scores GSM8k: 86.2%, ARC-C: 84.6%, RULER: 84.1%, PIQA: 81.0%, OpenBookQA: 79.2%.

Which model is cheaper?
Phi-3.5-mini-instruct is 5.0x cheaper for input tokens. GPT-3.5 Turbo costs $0.50/1M input and $1.50/1M output via Azure. Phi-3.5-mini-instruct costs $0.10/1M input and $0.10/1M output via Azure.

How do their context windows compare?
GPT-3.5 Turbo supports 16K tokens and Phi-3.5-mini-instruct supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the two models?
Key differences include context window (16K vs 128K), input pricing ($0.50 vs $0.10/1M), and licensing (proprietary vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who makes these models?
GPT-3.5 Turbo is developed by OpenAI and Phi-3.5-mini-instruct is developed by Microsoft.