Model Comparison

ERNIE 4.5 vs Phi-3.5-mini-instruct

Phi-3.5-mini-instruct significantly outperforms ERNIE 4.5 across most benchmarks and is 13.0x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

9 benchmarks

ERNIE 4.5 outperforms in 1 benchmark (GPQA), while Phi-3.5-mini-instruct is better in 8 benchmarks (ARC-C, GSM8k, HellaSwag, MATH, MMLU, MMLU-Pro, PIQA, Winogrande).


Wed Apr 15 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

Phi-3.5-mini-instruct costs less

For input processing, ERNIE 4.5 ($0.40/1M tokens) is 4.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

For output processing, ERNIE 4.5 ($4.00/1M tokens) is 40.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

Overall, ERNIE 4.5 is 13.0x more expensive than Phi-3.5-mini-instruct.*

* Using a 3:1 ratio of input to output tokens
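The blended figure can be reproduced with a short calculation. This is an illustrative sketch: `blended_price` is a hypothetical helper, and the prices are the per-1M-token rates quoted above.

```python
def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted average price per 1M tokens, assuming `ratio` input tokens
    for every output token (the 3:1 ratio used in the footnote above)."""
    return (ratio * input_price + output_price) / (ratio + 1)

ernie = blended_price(0.40, 4.00)  # ERNIE 4.5 via Novita
phi = blended_price(0.10, 0.10)    # Phi-3.5-mini-instruct via Azure

print(f"ERNIE 4.5 blended: ${ernie:.2f}/1M")  # $1.30/1M
print(f"Phi-3.5 blended:   ${phi:.2f}/1M")    # $0.10/1M
print(f"Ratio: {ernie / phi:.1f}x")           # 13.0x
```

The 13.0x headline figure follows directly from the two blended rates: $1.30 versus $0.10 per million tokens.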

Lowest available price from all providers
Baidu ERNIE 4.5
Input tokens: $0.40
Output tokens: $4.00
Best provider: Novita

Microsoft Phi-3.5-mini-instruct
Input tokens: $0.10
Output tokens: $0.10
Best provider: Azure

Model Size

Parameter count comparison

17.2B parameter difference

ERNIE 4.5 has 17.2B more parameters than Phi-3.5-mini-instruct, making it 452.6% larger.

Baidu ERNIE 4.5
21.0B parameters

Microsoft Phi-3.5-mini-instruct
3.8B parameters
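The size gap can be checked with simple arithmetic on the parameter counts above:

```python
# Parameter counts quoted above, in billions.
ernie_params = 21.0
phi_params = 3.8

diff = ernie_params - phi_params               # absolute gap: 17.2B
pct_larger = (ernie_params / phi_params - 1) * 100

print(f"{diff:.1f}B more parameters, {pct_larger:.1f}% larger")
# 17.2B more parameters, 452.6% larger
```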

Context Window

Maximum input and output token capacity

Both models have the same input context window of 128,000 tokens. Phi-3.5-mini-instruct can generate longer responses up to 128,000 tokens, while ERNIE 4.5 is limited to 65,536 tokens.

Baidu ERNIE 4.5
Input: 128,000 tokens
Output: 65,536 tokens

Microsoft Phi-3.5-mini-instruct
Input: 128,000 tokens
Output: 128,000 tokens

License

Usage and distribution terms

ERNIE 4.5 is licensed under a proprietary license, while Phi-3.5-mini-instruct uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

ERNIE 4.5

Proprietary

Closed source

Phi-3.5-mini-instruct

MIT

Open weights

Release Timeline

When each model was launched

ERNIE 4.5 was released on 2025-06-25, while Phi-3.5-mini-instruct was released on 2024-08-23.

ERNIE 4.5 is 10 months newer than Phi-3.5-mini-instruct.

ERNIE 4.5
Jun 25, 2025 (10 months newer)

Phi-3.5-mini-instruct
Aug 23, 2024

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

ERNIE 4.5 is available from Novita. Phi-3.5-mini-instruct is available from Azure.

ERNIE 4.5
Novita: $0.40/1M input, $4.00/1M output

Phi-3.5-mini-instruct
Azure: $0.10/1M input, $0.10/1M output

* Prices shown are per million tokens


Key Takeaways

ERNIE 4.5:
Higher GPQA score (74.0% vs 30.4%)

Phi-3.5-mini-instruct:
Less expensive input tokens
Less expensive output tokens
Has open weights (MIT license)
Higher ARC-C score (84.6% vs 40.6%)
Higher GSM8k score (86.2% vs 25.2%)
Higher HellaSwag score (69.4% vs 33.0%)
Higher MATH score (48.5% vs 12.4%)
Higher MMLU score (69.0% vs 41.9%)
Higher MMLU-Pro score (47.4% vs 16.0%)
Higher PIQA score (81.0% vs 55.2%)
Higher Winogrande score (68.5% vs 51.3%)


FAQ

Common questions about ERNIE 4.5 vs Phi-3.5-mini-instruct

Which model is better overall?
Phi-3.5-mini-instruct significantly outperforms across most benchmarks. ERNIE 4.5 is made by Baidu and Phi-3.5-mini-instruct is made by Microsoft. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
ERNIE 4.5's top scores are GPQA: 74.0%, ARC-E: 60.7%, PIQA: 55.2%, Winogrande: 51.3%, and CLUEWSC: 48.6%. Phi-3.5-mini-instruct's top scores are GSM8k: 86.2%, ARC-C: 84.6%, RULER: 84.1%, PIQA: 81.0%, and OpenBookQA: 79.2%.

Which model is cheaper?
Phi-3.5-mini-instruct is 4.0x cheaper for input tokens and 40.0x cheaper for output tokens. ERNIE 4.5 costs $0.40/1M input and $4.00/1M output via Novita; Phi-3.5-mini-instruct costs $0.10/1M input and $0.10/1M output via Azure.

Which has the larger context window?
Both models support a 128K-token input context window. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include input pricing ($0.40 vs $0.10/1M) and licensing (proprietary vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who makes each model?
ERNIE 4.5 is developed by Baidu; Phi-3.5-mini-instruct is developed by Microsoft.