Claude 3 Haiku vs Qwen2.5-Coder 32B Instruct Comparison

Comparing Claude 3 Haiku and Qwen2.5-Coder 32B Instruct across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

6 benchmarks

Claude 3 Haiku outperforms in 3 benchmarks (ARC-C, HellaSwag, MMLU), while Qwen2.5-Coder 32B Instruct is better at 3 benchmarks (GSM8k, HumanEval, MATH).

With three wins each, the two models are evenly matched overall: Claude 3 Haiku leads on general knowledge and reasoning benchmarks, while Qwen2.5-Coder 32B Instruct leads on math and coding.

Sun Mar 15 2026 • llm-stats.com


Pricing Analysis

Price comparison per million tokens

Qwen2.5-Coder 32B Instruct costs less

For input processing, Claude 3 Haiku ($0.25/1M tokens) is 2.8x more expensive than Qwen2.5-Coder 32B Instruct ($0.09/1M tokens).

For output processing, Claude 3 Haiku ($1.25/1M tokens) is 13.9x more expensive than Qwen2.5-Coder 32B Instruct ($0.09/1M tokens).

Overall, Claude 3 Haiku is more expensive than Qwen2.5-Coder 32B Instruct.*

* Using a 3:1 ratio of input to output tokens
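To make the 3:1 blended figure concrete, here is a minimal Python sketch that computes an effective price per million tokens under that ratio, using the per-token prices listed above. The function and variable names are illustrative, not part of any published tooling.

```python
# Blended price per 1M tokens, assuming a 3:1 input-to-output token ratio
# (prices taken from the comparison above; names are illustrative).

def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for a given input:output mix."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

claude_3_haiku = blended_price(0.25, 1.25)    # (0.25*3 + 1.25*1) / 4 = 0.50
qwen25_coder_32b = blended_price(0.09, 0.09)  # flat pricing -> 0.09

print(f"Claude 3 Haiku:   ${claude_3_haiku:.2f} per 1M tokens")
print(f"Qwen2.5-Coder 32B: ${qwen25_coder_32b:.2f} per 1M tokens")
print(f"Ratio: {claude_3_haiku / qwen25_coder_32b:.1f}x")  # ~5.6x at this mix
```

Under this 3:1 assumption, Claude 3 Haiku works out to roughly $0.50 per blended million tokens versus $0.09 for Qwen2.5-Coder 32B Instruct at its cheapest listed provider.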

Lowest available price from all providers
Anthropic
Claude 3 Haiku
Input tokens: $0.25/1M
Output tokens: $1.25/1M
Best provider: Anthropic

Alibaba Cloud / Qwen Team
Qwen2.5-Coder 32B Instruct
Input tokens: $0.09/1M
Output tokens: $0.09/1M
Best provider: Lambda

Context Window

Maximum input and output token capacity

Claude 3 Haiku accepts 200,000 input tokens, compared to Qwen2.5-Coder 32B Instruct's 128,000. Claude 3 Haiku can also generate longer responses, up to 200,000 tokens, while Qwen2.5-Coder 32B Instruct is limited to 128,000 tokens.

Anthropic
Claude 3 Haiku
Input: 200,000 tokens
Output: 200,000 tokens

Alibaba Cloud / Qwen Team
Qwen2.5-Coder 32B Instruct
Input: 128,000 tokens
Output: 128,000 tokens
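As a rough illustration of what these limits mean in practice, the sketch below estimates whether a prompt fits within each model's listed context window. The ~4-characters-per-token heuristic, the output reservation, and the helper names are assumptions for illustration, not real tokenizer behavior.

```python
# Rough context-window check using the limits listed above.
# The 4-characters-per-token estimate is a crude heuristic, not a real tokenizer.

CONTEXT_LIMITS = {
    "claude-3-haiku": 200_000,
    "qwen2.5-coder-32b-instruct": 128_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str, reserved_for_output: int = 4_096) -> bool:
    """Check whether an estimated prompt leaves room for the response (reservation is arbitrary)."""
    limit = CONTEXT_LIMITS[model]
    return estimate_tokens(text) + reserved_for_output <= limit

prompt = "Summarize this repository and list its public functions. " * 1_000
for model in CONTEXT_LIMITS:
    print(model, fits_in_context(prompt, model))
```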

Input Capabilities

Supported data types and modalities

Claude 3 Haiku supports multimodal inputs, whereas Qwen2.5-Coder 32B Instruct does not.

Claude 3 Haiku accepts images in addition to text, making it suitable for multimodal applications; Qwen2.5-Coder 32B Instruct accepts text only.

Claude 3 Haiku

Text: supported
Images: supported
Audio: not supported
Video: not supported

Qwen2.5-Coder 32B Instruct

Text: supported
Images: not supported
Audio: not supported
Video: not supported
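Because only one of the two models accepts images, an application mixing text-only and image-bearing requests has to route accordingly. The sketch below is a minimal, hypothetical routing helper based on the modality lists above; the model identifiers and function name are illustrative, not official API names.

```python
# Route a request to a model that supports the required input modalities,
# based on the capability lists above. Identifiers are illustrative.

SUPPORTED_INPUTS = {
    "claude-3-haiku": {"text", "image"},
    "qwen2.5-coder-32b-instruct": {"text"},
}

def pick_model(required: set[str], preferred: str = "qwen2.5-coder-32b-instruct") -> str:
    """Prefer the cheaper text-only model; fall back to a multimodal one when needed."""
    if required <= SUPPORTED_INPUTS[preferred]:
        return preferred
    for model, modalities in SUPPORTED_INPUTS.items():
        if required <= modalities:
            return model
    raise ValueError(f"No listed model supports: {required}")

print(pick_model({"text"}))           # qwen2.5-coder-32b-instruct
print(pick_model({"text", "image"}))  # claude-3-haiku
```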

License

Usage and distribution terms

Claude 3 Haiku is licensed under a proprietary license, while Qwen2.5-Coder 32B Instruct uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Claude 3 Haiku

Proprietary

Closed source

Qwen2.5-Coder 32B Instruct

Apache 2.0

Open weights

Release Timeline

When each model was launched

Claude 3 Haiku was released on 2024-03-13, while Qwen2.5-Coder 32B Instruct was released on 2024-09-19.

Qwen2.5-Coder 32B Instruct is 6 months newer than Claude 3 Haiku.

Claude 3 Haiku

Mar 13, 2024

2.0 years ago

Qwen2.5-Coder 32B Instruct

Sep 19, 2024

1.5 years ago

6mo newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

No cutoff dates available

Provider Availability

Claude 3 Haiku is available from Anthropic, AWS Bedrock, and Google. Qwen2.5-Coder 32B Instruct is available from Lambda, DeepInfra, Hyperbolic, and Fireworks. Provider choice can affect a model's serving quality, reliability, and price.

Claude 3 Haiku

Anthropic
Input: $0.25/1M • Output: $1.25/1M
AWS Bedrock
Input: $0.25/1M • Output: $1.25/1M
Google
Input: $0.25/1M • Output: $1.25/1M

Qwen2.5-Coder 32B Instruct

Lambda
Input: $0.09/1M • Output: $0.09/1M
DeepInfra
Input: $0.18/1M • Output: $0.18/1M
Hyperbolic
Input: $0.20/1M • Output: $0.20/1M
Fireworks
Input: $0.89/1M • Output: $0.89/1M
* Prices shown are per million tokens
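Given the per-provider prices above, one simple way to compare hosting options is to compute a blended cost per provider and pick the cheapest. The sketch below reuses the 3:1 input:output ratio from the pricing section; the prices are copied from the Qwen2.5-Coder 32B Instruct provider list, and the structure is illustrative rather than any provider's actual API.

```python
# Pick the cheapest listed provider for Qwen2.5-Coder 32B Instruct,
# using the 3:1 input:output blend from the pricing section.
# Prices ($ per 1M tokens) are copied from the provider list above.

PROVIDERS = {
    "Lambda":     {"input": 0.09, "output": 0.09},
    "DeepInfra":  {"input": 0.18, "output": 0.18},
    "Hyperbolic": {"input": 0.20, "output": 0.20},
    "Fireworks":  {"input": 0.89, "output": 0.89},
}

def blended(prices: dict, input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted price per 1M tokens for the assumed input:output mix."""
    total = input_ratio + output_ratio
    return (prices["input"] * input_ratio + prices["output"] * output_ratio) / total

cheapest = min(PROVIDERS, key=lambda name: blended(PROVIDERS[name]))
print(cheapest, f"${blended(PROVIDERS[cheapest]):.2f}/1M tokens")  # Lambda $0.09/1M tokens
```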


Key Takeaways

Claude 3 Haiku

Larger context window (200,000 tokens)
Supports multimodal inputs
Higher ARC-C score (89.2% vs 70.5%)
Higher HellaSwag score (85.9% vs 83.0%)
Higher MMLU score (75.2% vs 75.1%)

Qwen2.5-Coder 32B Instruct

Less expensive input tokens
Less expensive output tokens
Has open weights
Higher GSM8k score (91.1% vs 88.9%)
Higher HumanEval score (92.7% vs 75.9%)
Higher MATH score (57.2% vs 38.9%)

Detailed Comparison

AI Model Comparison Table

Feature             Claude 3 Haiku (Anthropic)      Qwen2.5-Coder 32B Instruct (Alibaba Cloud / Qwen Team)
Input price         $0.25/1M tokens                 $0.09/1M tokens
Output price        $1.25/1M tokens                 $0.09/1M tokens
Context window      200,000 tokens                  128,000 tokens
Input modalities    Text, images                    Text only
License             Proprietary (closed source)     Apache 2.0 (open weights)
Release date        Mar 13, 2024                    Sep 19, 2024