Model Comparison

DeepSeek R1 Distill Qwen 1.5B vs DeepSeek R1 Distill Qwen 32B

DeepSeek R1 Distill Qwen 32B significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

DeepSeek R1 Distill Qwen 32B leads on all 4 benchmarks compared here (AIME 2024, GPQA, LiveCodeBench, MATH-500); DeepSeek R1 Distill Qwen 1.5B leads on none.

Wed Apr 01 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Comparative cost data is unavailable for DeepSeek R1 Distill Qwen 1.5B; only DeepSeek R1 Distill Qwen 32B has provider pricing listed below.

Lowest available price from all providers
DeepSeek
DeepSeek R1 Distill Qwen 1.5B
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown Organization
DeepSeek
DeepSeek R1 Distill Qwen 32B
Input tokens: $0.12
Output tokens: $0.18
Best provider: Deepinfra
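As a quick illustration of how per-million-token pricing translates into request cost, the sketch below estimates the price of a single call to DeepSeek R1 Distill Qwen 32B at the Deepinfra rates listed above ($0.12 input, $0.18 output per million tokens). The token counts are hypothetical example values, not figures from this page.

```python
# Estimate the cost of one request from per-million-token prices.
# Rates are the Deepinfra prices listed above for R1 Distill Qwen 32B;
# the token counts in the example call are made-up illustration values.

INPUT_PRICE_PER_M = 0.12   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.18  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 4,000-token prompt that produces a 1,500-token answer.
print(f"${request_cost(4_000, 1_500):.6f}")  # ≈ $0.00075
```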

Model Size

Parameter count comparison

31.0B parameter difference

DeepSeek R1 Distill Qwen 32B has 31.0B more parameters than DeepSeek R1 Distill Qwen 1.5B, making it 1742.7% larger.

DeepSeek
DeepSeek R1 Distill Qwen 1.5B
1.8B parameters
DeepSeek
DeepSeek R1 Distill Qwen 32B
32.8B parameters

Context Window

Maximum input and output token capacity

Only DeepSeek R1 Distill Qwen 32B specifies context limits: 128,000 input tokens and 128,000 output tokens. No context window is listed for DeepSeek R1 Distill Qwen 1.5B.

DeepSeek
DeepSeek R1 Distill Qwen 1.5B
Input: not specified
Output: not specified
DeepSeek
DeepSeek R1 Distill Qwen 32B
Input: 128,000 tokens
Output: 128,000 tokens
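To stay under the 32B model's 128,000-token input limit, you can count tokens before sending a request. Below is a minimal sketch using the Hugging Face transformers tokenizer; the repository ID deepseek-ai/DeepSeek-R1-Distill-Qwen-32B is assumed to be the published checkpoint name and is not confirmed by this page.

```python
# Minimal sketch: check whether a prompt fits within the 128K input limit.
# Assumes the open-weight checkpoint is published on the Hugging Face Hub
# under the repo ID below (an assumption, not stated on this page).
from transformers import AutoTokenizer

MAX_INPUT_TOKENS = 128_000  # input context listed for R1 Distill Qwen 32B

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
)

def fits_in_context(prompt: str) -> bool:
    """Return True if the tokenized prompt is within the input limit."""
    n_tokens = len(tokenizer(prompt)["input_ids"])
    return n_tokens <= MAX_INPUT_TOKENS

print(fits_in_context("Summarize this document: ..."))
```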

License

Usage and distribution terms

Both models are licensed under MIT.

Both models share the same licensing terms, providing consistent usage rights.

DeepSeek R1 Distill Qwen 1.5B

MIT

Open weights

DeepSeek R1 Distill Qwen 32B

MIT

Open weights
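Since both checkpoints are released as open weights under MIT, either can be downloaded and run locally for evaluation. The sketch below loads the smaller 1.5B model with the transformers library; the Hugging Face repo ID is an assumption based on DeepSeek's published naming, and dtype and device settings are illustrative defaults.

```python
# Minimal sketch: load the open-weight 1.5B distill locally and generate text.
# The Hub repo ID is assumed from DeepSeek's naming; adjust if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # small enough for a single consumer GPU
    device_map="auto",
)

inputs = tokenizer("What is 17 * 24?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```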

Release Timeline

When each model was launched

Both models were released on 2025-01-20.

Released on the same day as part of the DeepSeek R1 distillation family, they represent the same generation of model development.

DeepSeek R1 Distill Qwen 1.5B

Jan 20, 2025

1.2 years ago

DeepSeek R1 Distill Qwen 32B

Jan 20, 2025

1.2 years ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Outputs Comparison


Key Takeaways

Compared with DeepSeek R1 Distill Qwen 1.5B, DeepSeek R1 Distill Qwen 32B has:
Larger context window (128,000 tokens)
Higher AIME 2024 score (83.3% vs 52.7%)
Higher GPQA score (62.1% vs 33.8%)
Higher LiveCodeBench score (57.2% vs 16.9%)
Higher MATH-500 score (94.3% vs 83.9%)

Detailed Comparison

FAQ

Common questions about DeepSeek R1 Distill Qwen 1.5B vs DeepSeek R1 Distill Qwen 32B

DeepSeek R1 Distill Qwen 32B significantly outperforms across most benchmarks. Both models are made by DeepSeek. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.
DeepSeek R1 Distill Qwen 1.5B scores MATH-500: 83.9%, AIME 2024: 52.7%, GPQA: 33.8%, LiveCodeBench: 16.9%. DeepSeek R1 Distill Qwen 32B scores MATH-500: 94.3%, AIME 2024: 83.3%, GPQA: 62.1%, LiveCodeBench: 57.2%.
DeepSeek R1 Distill Qwen 1.5B does not list a context window, while DeepSeek R1 Distill Qwen 32B supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.