DeepSeek R1 Distill Qwen 1.5B vs GPT-4 Turbo

Comparing DeepSeek R1 Distill Qwen 1.5B by DeepSeek and GPT-4 Turbo by OpenAI across benchmarks, pricing, and capabilities.

GPT-4 Turbo significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

1 benchmark

GPT-4 Turbo leads on the only reported benchmark (GPQA); DeepSeek R1 Distill Qwen 1.5B leads on none.


Tue Mar 24 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Complete cost data is unavailable for DeepSeek R1 Distill Qwen 1.5B; its listed $0.00 rates likely reflect missing provider data.

Lowest available price from all providers:

DeepSeek R1 Distill Qwen 1.5B (DeepSeek)
  Input tokens:  $0.00 per million
  Output tokens: $0.00 per million
  Best provider: Unknown Organization

GPT-4 Turbo (OpenAI)
  Input tokens:  $10.00 per million
  Output tokens: $30.00 per million
  Best provider: Azure
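To illustrate how the per-million-token rates above translate into request costs, here is a minimal Python sketch; the token counts are hypothetical, and real usage would come from the provider's token accounting.

```python
# Estimate the cost of one request from per-million-token rates.
def request_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Rates are in USD per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# GPT-4 Turbo rates from the table above: $10.00 input, $30.00 output.
cost = request_cost(input_tokens=5_000, output_tokens=1_000,
                    input_rate=10.00, output_rate=30.00)
print(f"${cost:.2f}")  # $0.08
```

At these rates, output tokens cost three times as much as input tokens, so long generations dominate the bill even for prompt-heavy workloads.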

Context Window

Maximum input and output token capacity

Only GPT-4 Turbo specifies context limits: 128,000 input tokens and 4,096 output tokens. DeepSeek R1 Distill Qwen 1.5B's limits are not listed.

DeepSeek R1 Distill Qwen 1.5B (DeepSeek)
  Input:  not specified
  Output: not specified

GPT-4 Turbo (OpenAI)
  Input:  128,000 tokens
  Output: 4,096 tokens
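A request must respect both limits at once. The following Python sketch checks a hypothetical request against GPT-4 Turbo's documented caps; in practice the token counts would come from a tokenizer, not be hard-coded.

```python
# GPT-4 Turbo limits from the table above.
MAX_INPUT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 4_096

def fits(prompt_tokens, requested_output_tokens):
    """Return True if the request fits within both context limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and requested_output_tokens <= MAX_OUTPUT_TOKENS)

print(fits(120_000, 2_000))  # True
print(fits(120_000, 8_000))  # False: output exceeds the 4,096 cap
```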

License

Usage and distribution terms

DeepSeek R1 Distill Qwen 1.5B is licensed under MIT, while GPT-4 Turbo uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek R1 Distill Qwen 1.5B

MIT

Open weights

GPT-4 Turbo

Proprietary

Closed source

Release Timeline

When each model was launched

DeepSeek R1 Distill Qwen 1.5B was released on 2025-01-20, while GPT-4 Turbo was released on 2024-04-09.

DeepSeek R1 Distill Qwen 1.5B is about 9 months newer than GPT-4 Turbo.

DeepSeek R1 Distill Qwen 1.5B: Jan 20, 2025 (1.2 years ago, 9 months newer)
GPT-4 Turbo: Apr 9, 2024 (2.0 years ago)
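The release gap stated above can be checked directly from the two dates with a short Python snippet:

```python
from datetime import date

# Release dates from the timeline above.
deepseek_r1_distill = date(2025, 1, 20)
gpt4_turbo = date(2024, 4, 9)

gap_days = (deepseek_r1_distill - gpt4_turbo).days
gap_months = gap_days / 30.44  # average month length

print(gap_days)              # 286
print(round(gap_months, 1))  # 9.4
```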

Knowledge Cutoff

When training data ends

GPT-4 Turbo has a documented knowledge cutoff of 2023-12-31, while DeepSeek R1 Distill Qwen 1.5B's cutoff date is not specified.

We can confirm GPT-4 Turbo's training data extends to 2023-12-31, but cannot make a direct comparison without DeepSeek R1 Distill Qwen 1.5B's cutoff date.

DeepSeek R1 Distill Qwen 1.5B: not specified
GPT-4 Turbo: Dec 2023

Outputs Comparison


Key Takeaways

GPT-4 Turbo offers a larger context window (128,000 tokens).
GPT-4 Turbo scores higher on GPQA (48.0% vs 33.8%).

Detailed Comparison

AI Model Comparison Table

Feature | DeepSeek R1 Distill Qwen 1.5B (DeepSeek) | GPT-4 Turbo (OpenAI)