Claude 3 Haiku vs Phi 4 Comparison

Comparing Claude 3 Haiku and Phi 4 across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

6 benchmarks

Claude 3 Haiku outperforms in 1 benchmark (DROP), while Phi 4 leads in 5 benchmarks (GPQA, HumanEval, MATH, MGSM, MMLU).

Phi 4 significantly outperforms across most benchmarks.

Thu Mar 19 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data unavailable.
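Although this page reports no actual prices, the per-million-token pricing scheme it compares against works the same way for any provider. The sketch below shows how a single request's cost is computed; the rates in the example are hypothetical placeholders, not prices for either model.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: 10k input + 2k output tokens at hypothetical $0.50/$1.50 rates.
print(round(request_cost(10_000, 2_000, 0.50, 1.50), 4))  # 0.008
```

Input and output tokens are priced separately because output rates are typically several times higher than input rates.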


Input Capabilities

Supported data types and modalities

Claude 3 Haiku supports multimodal inputs, whereas Phi 4 does not.

Claude 3 Haiku accepts both text and image inputs, making it suitable for multimodal applications.

Claude 3 Haiku

Text ✓
Images ✓
Audio ✗
Video ✗

Phi 4

Text ✓
Images ✗
Audio ✗
Video ✗

License

Usage and distribution terms

Claude 3 Haiku is licensed under a proprietary license, while Phi 4 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Claude 3 Haiku

Proprietary

Closed source

Phi 4

MIT

Open weights

Release Timeline

When each model was launched

Claude 3 Haiku was released on 2024-03-13, while Phi 4 was released on 2024-12-12.

Phi 4 is 9 months newer than Claude 3 Haiku.

Claude 3 Haiku

Mar 13, 2024

2.0 years ago

Phi 4

Dec 12, 2024

1.3 years ago

9mo newer
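The "9mo newer" gap above can be verified directly from the two release dates given on this page:

```python
from datetime import date

claude_release = date(2024, 3, 13)   # Claude 3 Haiku
phi_release = date(2024, 12, 12)     # Phi 4

# Calendar-month difference between the two release dates.
months = (phi_release.year - claude_release.year) * 12 \
       + (phi_release.month - claude_release.month)
print(months)  # 9
```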

Knowledge Cutoff

When training data ends

Phi 4 has a documented knowledge cutoff of 2024-06-01, while Claude 3 Haiku's cutoff date is not specified.

We can confirm Phi 4's training data extends to 2024-06-01, but cannot make a direct comparison without Claude 3 Haiku's cutoff date.

Claude 3 Haiku

Not specified

Phi 4

Jun 2024

Outputs Comparison


Key Takeaways

Claude 3 Haiku

Supports multimodal inputs
Higher DROP score (78.4% vs 75.5%)

Phi 4

Has open weights
Higher GPQA score (56.1% vs 33.3%)
Higher HumanEval score (82.6% vs 75.9%)
Higher MATH score (80.4% vs 38.9%)
Higher MGSM score (80.6% vs 75.1%)
Higher MMLU score (84.8% vs 75.2%)
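The per-benchmark tally reported on this page can be reproduced with a short script. The scores are taken directly from the takeaways above (each "X% vs Y%" pair reattributed to its model); the tallying logic itself is a sketch, not part of the site:

```python
# Benchmark scores (percent) from this comparison page.
scores = {
    "DROP":      {"Claude 3 Haiku": 78.4, "Phi 4": 75.5},
    "GPQA":      {"Claude 3 Haiku": 33.3, "Phi 4": 56.1},
    "HumanEval": {"Claude 3 Haiku": 75.9, "Phi 4": 82.6},
    "MATH":      {"Claude 3 Haiku": 38.9, "Phi 4": 80.4},
    "MGSM":      {"Claude 3 Haiku": 75.1, "Phi 4": 80.6},
    "MMLU":      {"Claude 3 Haiku": 75.2, "Phi 4": 84.8},
}

# Count which model scores higher on each benchmark.
wins = {"Claude 3 Haiku": [], "Phi 4": []}
for bench, by_model in scores.items():
    winner = max(by_model, key=by_model.get)
    wins[winner].append(bench)

for model, benches in wins.items():
    print(f"{model}: {len(benches)} win(s) - {', '.join(benches)}")
```

Running this reproduces the 1-vs-5 split stated in the benchmark summary.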

Detailed Comparison

AI Model Comparison Table
Feature | Claude 3 Haiku (Anthropic) | Phi 4 (Microsoft)