Model Comparison

DeepSeek R1 Zero vs Phi 4 Reasoning

DeepSeek R1 Zero shows notably better performance in the majority of benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

DeepSeek R1 Zero outperforms on 2 benchmarks (AIME 2024, GPQA), while Phi 4 Reasoning leads on 1 (LiveCodeBench).

Tue Apr 21 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data is unavailable for both models; no provider pricing is currently listed for DeepSeek R1 Zero or Phi 4 Reasoning.

Model Size

Parameter count comparison

657.0B diff

DeepSeek R1 Zero has 657.0B more parameters than Phi 4 Reasoning, making it 4692.9% larger.

DeepSeek R1 Zero (DeepSeek): 671.0B parameters

Phi 4 Reasoning (Microsoft): 14.0B parameters
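The size gap quoted above can be checked directly from the two parameter counts; a quick sketch using only the figures on this page:

```python
# Verify the parameter-count gap between the two models.
deepseek_params = 671.0  # billions, DeepSeek R1 Zero
phi_params = 14.0        # billions, Phi 4 Reasoning

diff = deepseek_params - phi_params            # absolute gap in billions
pct_larger = (diff / phi_params) * 100         # relative to the smaller model

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")
```

This reproduces the "657.0B diff" and "4692.9% larger" figures: the percentage is measured relative to Phi 4 Reasoning's 14.0B, not to the total.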

License

Usage and distribution terms

Both models are licensed under MIT, so they share identical usage and distribution rights.

DeepSeek R1 Zero

MIT

Open weights

Phi 4 Reasoning

MIT

Open weights

Release Timeline

When each model was launched

DeepSeek R1 Zero was released on 2025-01-20, while Phi 4 Reasoning was released on 2025-04-30.

Phi 4 Reasoning is 3 months newer than DeepSeek R1 Zero.

DeepSeek R1 Zero: Jan 20, 2025 (1.2 years ago)

Phi 4 Reasoning: Apr 30, 2025 (11 months ago)
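The "3 months newer" claim follows from the two release dates above; a minimal check:

```python
from datetime import date

# Release dates quoted on this page.
deepseek_release = date(2025, 1, 20)  # DeepSeek R1 Zero
phi_release = date(2025, 4, 30)       # Phi 4 Reasoning

gap_days = (phi_release - deepseek_release).days
gap_months = gap_days / 30.44  # average Gregorian month length

print(f"Phi 4 Reasoning is ~{round(gap_months)} months newer ({gap_days} days)")
```

The gap is 100 days, which rounds to roughly 3 months.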

Knowledge Cutoff

When training data ends

Phi 4 Reasoning has a documented knowledge cutoff of 2025-03-01, while DeepSeek R1 Zero's cutoff date is not specified.

We can confirm Phi 4 Reasoning's training data extends to 2025-03-01, but cannot make a direct comparison without DeepSeek R1 Zero's cutoff date.

DeepSeek R1 Zero: not specified

Phi 4 Reasoning: Mar 2025

Key Takeaways

DeepSeek R1 Zero has the higher AIME 2024 score (86.7% vs 75.3%)
DeepSeek R1 Zero has the higher GPQA score (73.3% vs 65.8%)
Phi 4 Reasoning has the higher LiveCodeBench score (53.8% vs 50.0%)
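The "2 benchmarks vs 1" tally earlier on this page falls out of these three shared scores; a short sketch using only the numbers quoted above:

```python
# Tally per-benchmark wins from the shared scores on this page.
# Each value is (DeepSeek R1 Zero, Phi 4 Reasoning), in percent.
scores = {
    "AIME 2024": (86.7, 75.3),
    "GPQA": (73.3, 65.8),
    "LiveCodeBench": (50.0, 53.8),
}

deepseek_wins = [b for b, (d, p) in scores.items() if d > p]
phi_wins = [b for b, (d, p) in scores.items() if p > d]

print(f"DeepSeek R1 Zero wins {len(deepseek_wins)}: {deepseek_wins}")
print(f"Phi 4 Reasoning wins {len(phi_wins)}: {phi_wins}")
```

This yields 2 wins for DeepSeek R1 Zero (AIME 2024, GPQA) and 1 for Phi 4 Reasoning (LiveCodeBench), matching the summary above.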

Detailed Comparison

[Feature comparison table: DeepSeek R1 Zero (DeepSeek) vs Phi 4 Reasoning (Microsoft); table rows did not survive extraction.]

FAQ

Common questions about DeepSeek R1 Zero vs Phi 4 Reasoning

DeepSeek R1 Zero shows notably better performance in the majority of benchmarks. The best choice still depends on your use case: compare their benchmark scores, pricing, and capabilities above.
DeepSeek R1 Zero scores MATH-500: 95.9%, AIME 2024: 86.7%, GPQA: 73.3%, LiveCodeBench: 50.0%. Phi 4 Reasoning scores FlenQA: 97.7%, HumanEval+: 92.9%, IFEval: 83.4%, OmniMath: 76.6%, AIME 2024: 75.3%.
DeepSeek R1 Zero is developed by DeepSeek and Phi 4 Reasoning is developed by Microsoft.