DeepSeek R1 Distill Qwen 7B vs DeepSeek R1 Distill Llama 8B Comparison

Performance Benchmarks

Comparative analysis across standard metrics

Across 4 benchmarks, DeepSeek R1 Distill Qwen 7B outperforms on 3 (AIME 2024, GPQA, MATH-500), while DeepSeek R1 Distill Llama 8B leads on 1 (LiveCodeBench).

DeepSeek R1 Distill Qwen 7B shows the stronger results on most benchmarks, with clear margins on AIME 2024 and MATH-500; its GPQA lead (49.1% vs 49.0%) is negligible.
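For readers who want to reproduce the tally, here is a minimal sketch using the scores listed in the Key Takeaways section below (higher score wins each benchmark):

```python
# Benchmark scores (percent) taken from the Key Takeaways section of this page.
scores = {
    "AIME 2024":     {"Qwen 7B": 83.3, "Llama 8B": 80.0},
    "GPQA":          {"Qwen 7B": 49.1, "Llama 8B": 49.0},
    "MATH-500":      {"Qwen 7B": 92.8, "Llama 8B": 89.1},
    "LiveCodeBench": {"Qwen 7B": 37.6, "Llama 8B": 39.6},
}

wins = {"Qwen 7B": 0, "Llama 8B": 0}
for bench, by_model in scores.items():
    winner = max(by_model, key=by_model.get)
    wins[winner] += 1
    print(f"{bench}: {winner} leads at {by_model[winner]:.1f}%")

print(wins)  # {'Qwen 7B': 3, 'Llama 8B': 1}
```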


Arena Performance

Human preference votes

No arena vote data is available for either model.

Pricing Analysis

Price comparison per million tokens

Cost data unavailable.
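Even without published prices, the per-million-token unit is straightforward to apply: request cost is token count times price, divided by one million. A minimal sketch; the prices below are hypothetical placeholders, not real quotes for either model:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in dollars, given prices quoted per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Hypothetical prices for illustration only; no real quotes are available.
print(request_cost(2_000, 500, input_price=0.10, output_price=0.20))  # 0.0003
```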


Model Size

Parameter count comparison

DeepSeek R1 Distill Llama 8B has about 410M (0.4B) more parameters than DeepSeek R1 Distill Qwen 7B, making it roughly 5.4% larger.
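The 410M figure implies underlying counts of roughly 7.62B and 8.03B (the rounded 7.6B/8.0B values alone would give 400M). A quick check, assuming those counts:

```python
qwen_7b  = 7.62e9   # assumed exact count behind the rounded "7.6B"
llama_8b = 8.03e9   # assumed exact count behind the rounded "8.0B"

diff = llama_8b - qwen_7b
print(f"{diff / 1e6:.1f}M difference")        # 410.0M difference
print(f"{diff / qwen_7b * 100:.1f}% larger")  # 5.4% larger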

DeepSeek R1 Distill Qwen 7B: 7.6B parameters
DeepSeek R1 Distill Llama 8B: 8.0B parameters

License

Usage and distribution terms

Both models are licensed under MIT.

Both models share the same permissive terms, allowing commercial use, modification, and redistribution.

DeepSeek R1 Distill Qwen 7B

MIT

Open weights

DeepSeek R1 Distill Llama 8B

MIT

Open weights
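Since both checkpoints are MIT-licensed with open weights, either can be downloaded and run locally. A minimal sketch using Hugging Face transformers; the repo IDs below are assumed from DeepSeek's published model cards, so verify them before use:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo IDs; swap in "deepseek-ai/DeepSeek-R1-Distill-Llama-8B" for the other model.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("What is 17 * 24?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```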

Release Timeline

When each model was launched

Both models were released on January 20, 2025, as part of the DeepSeek R1 distillation family, so they represent the same generation of model development.

DeepSeek R1 Distill Qwen 7B: Jan 20, 2025
DeepSeek R1 Distill Llama 8B: Jan 20, 2025

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

DeepSeek R1 Distill Qwen 7B: higher AIME 2024 score (83.3% vs 80.0%)
DeepSeek R1 Distill Qwen 7B: higher GPQA score (49.1% vs 49.0%)
DeepSeek R1 Distill Qwen 7B: higher MATH-500 score (92.8% vs 89.1%)
DeepSeek R1 Distill Llama 8B: higher LiveCodeBench score (39.6% vs 37.6%)

Detailed Comparison