Model Comparison

Jamba 1.5 Large vs Qwen2.5-Coder 32B Instruct

Jamba 1.5 Large leads on most benchmarks, while Qwen2.5-Coder 32B Instruct is roughly 38.9x cheaper per token on a blended 3:1 input-to-output basis.

Performance Benchmarks

Comparative analysis across standard metrics

Across 5 benchmarks, Jamba 1.5 Large outperforms on 4 (ARC-C, MMLU, MMLU-Pro, TruthfulQA), while Qwen2.5-Coder 32B Instruct leads on 1 (GSM8k).

Pricing Analysis

Price comparison per million tokens

Qwen2.5-Coder 32B Instruct costs less

For input processing, Jamba 1.5 Large ($2.00/1M tokens) is 22.2x more expensive than Qwen2.5-Coder 32B Instruct ($0.09/1M tokens).

For output processing, Jamba 1.5 Large ($8.00/1M tokens) is 88.9x more expensive than Qwen2.5-Coder 32B Instruct ($0.09/1M tokens).

In conclusion, Jamba 1.5 Large is roughly 38.9x more expensive than Qwen2.5-Coder 32B Instruct on a blended basis.*

* Using a 3:1 ratio of input to output tokens
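As a minimal sketch, the blended figure can be reproduced from the prices and the 3:1 ratio above (the helper function is illustrative, not part of llm-stats.com):

```python
def blended_cost(input_price: float, output_price: float,
                 input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens, using a 3:1 input:output mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

jamba = blended_cost(2.00, 8.00)  # (3 * 2.00 + 1 * 8.00) / 4 = 3.50
qwen = blended_cost(0.09, 0.09)   # 0.09
print(f"blended: ${jamba:.2f} vs ${qwen:.2f} per 1M tokens")
print(f"ratio: {jamba / qwen:.1f}x")  # ~38.9x, the headline figure above
```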

Lowest available price from all providers
AI21 Labs: Jamba 1.5 Large
  Input tokens: $2.00
  Output tokens: $8.00
  Best provider: AWS Bedrock

Alibaba Cloud / Qwen Team: Qwen2.5-Coder 32B Instruct
  Input tokens: $0.09
  Output tokens: $0.09
  Best provider: Lambda

Model Size

Parameter count comparison

Difference: 366.0B parameters

Jamba 1.5 Large has 366.0B more parameters than Qwen2.5-Coder 32B Instruct, making it 1143.8% larger.
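The size gap works out as follows, a quick arithmetic sketch:

```python
jamba_params = 398.0e9
qwen_params = 32.0e9

diff = jamba_params - qwen_params      # 366.0B
pct_larger = diff / qwen_params * 100  # 1143.75 -> prints as 1143.8
print(f"{diff / 1e9:.1f}B more parameters ({pct_larger:.1f}% larger)")
```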

AI21 Labs: Jamba 1.5 Large (398.0B parameters)
Alibaba Cloud / Qwen Team: Qwen2.5-Coder 32B Instruct (32.0B parameters)

Context Window

Maximum input and output token capacity

Jamba 1.5 Large accepts up to 256,000 input tokens, compared with Qwen2.5-Coder 32B Instruct's 128,000. Jamba 1.5 Large can also generate longer responses, up to 256,000 tokens, while Qwen2.5-Coder 32B Instruct is limited to 128,000 tokens.
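As a rough sketch, here is one way to estimate whether a prompt fits each window; the 4-characters-per-token heuristic is an assumption, not an exact tokenizer:

```python
# Context limits from the table below (tokens).
CONTEXT_LIMITS = {
    "Jamba 1.5 Large": 256_000,
    "Qwen2.5-Coder 32B Instruct": 128_000,
}

def fits_context(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Estimate token count from character count and compare to the limit."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_LIMITS[model]

document = "x" * 600_000  # ~150K estimated tokens
for model in CONTEXT_LIMITS:
    verdict = "fits" if fits_context(document, model) else "does not fit"
    print(f"{model}: {verdict}")
# ~150K tokens fits Jamba's 256K window but exceeds Qwen's 128K limit.
```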

AI21 Labs: Jamba 1.5 Large
  Input: 256,000 tokens
  Output: 256,000 tokens

Alibaba Cloud / Qwen Team: Qwen2.5-Coder 32B Instruct
  Input: 128,000 tokens
  Output: 128,000 tokens

License

Usage and distribution terms

Jamba 1.5 Large is licensed under Jamba Open Model License, while Qwen2.5-Coder 32B Instruct uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Jamba 1.5 Large: Jamba Open Model License (open weights)

Qwen2.5-Coder 32B Instruct: Apache 2.0 (open weights)

Release Timeline

When each model was launched

Jamba 1.5 Large was released on 2024-08-22, while Qwen2.5-Coder 32B Instruct was released on 2024-09-19.

Qwen2.5-Coder 32B Instruct is 1 month newer than Jamba 1.5 Large.
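The gap can be verified with simple date arithmetic, a quick sketch using the release dates listed below:

```python
from datetime import date

jamba_release = date(2024, 8, 22)
qwen_release = date(2024, 9, 19)

gap = (qwen_release - jamba_release).days
print(f"Qwen2.5-Coder 32B Instruct is {gap} days (~{gap // 7} weeks) newer")
# 28 days -> ~4 weeks, i.e. roughly one month
```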

Jamba 1.5 Large: Aug 22, 2024 (1.7 years ago)

Qwen2.5-Coder 32B Instruct: Sep 19, 2024 (1.6 years ago; 4 weeks newer)

Knowledge Cutoff

When training data ends

Jamba 1.5 Large has a documented knowledge cutoff of 2024-03-05, while Qwen2.5-Coder 32B Instruct's cutoff date is not specified.

We can confirm Jamba 1.5 Large's training data extends to 2024-03-05, but cannot make a direct comparison without Qwen2.5-Coder 32B Instruct's cutoff date.

Jamba 1.5 Large: Mar 2024

Qwen2.5-Coder 32B Instruct: not specified

Provider Availability

Jamba 1.5 Large is available from AWS Bedrock and Google. Qwen2.5-Coder 32B Instruct is available from Lambda, DeepInfra, Hyperbolic, and Fireworks.

Jamba 1.5 Large
  AWS Bedrock: Input $2.00/1M, Output $8.00/1M
  Google: Input $2.00/1M, Output $8.00/1M

Qwen2.5-Coder 32B Instruct
  Lambda: Input $0.09/1M, Output $0.09/1M
  DeepInfra: Input $0.18/1M, Output $0.18/1M
  Hyperbolic: Input $0.20/1M, Output $0.20/1M
  Fireworks: Input $0.89/1M, Output $0.89/1M
* Prices shown are per million tokens
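As a sketch, the "best provider" picks above can be reproduced by ranking the listed prices. The blended 3:1 weighting mirrors the pricing section, and the data structure here is illustrative:

```python
# Per-1M-token prices from the provider lists above: (name, input, output).
providers = {
    "Jamba 1.5 Large": [
        ("AWS Bedrock", 2.00, 8.00),
        ("Google", 2.00, 8.00),
    ],
    "Qwen2.5-Coder 32B Instruct": [
        ("Lambda", 0.09, 0.09),
        ("DeepInfra", 0.18, 0.18),
        ("Hyperbolic", 0.20, 0.20),
        ("Fireworks", 0.89, 0.89),
    ],
}

for model, options in providers.items():
    # Rank by blended 3:1 input:output cost, as elsewhere on this page.
    name, inp, out = min(options, key=lambda p: (3 * p[1] + p[2]) / 4)
    print(f"{model}: cheapest is {name} (${inp:.2f} in / ${out:.2f} out per 1M)")
```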

Key Takeaways

Jamba 1.5 Large:
  Larger context window (256,000 vs 128,000 tokens)
  Higher ARC-C score (93.0% vs 70.5%)
  Higher MMLU score (81.2% vs 75.1%)
  Higher MMLU-Pro score (53.5% vs 50.4%)
  Higher TruthfulQA score (58.3% vs 54.2%)

Qwen2.5-Coder 32B Instruct:
  Less expensive input tokens ($0.09 vs $2.00 per 1M)
  Less expensive output tokens ($0.09 vs $8.00 per 1M)
  Higher GSM8k score (91.1% vs 87.0%)

Detailed Comparison

Feature            Jamba 1.5 Large (AI21 Labs)   Qwen2.5-Coder 32B Instruct (Alibaba Cloud / Qwen Team)
Parameters         398.0B                        32.0B
Context window     256,000 in / 256,000 out      128,000 in / 128,000 out
Input price        $2.00/1M tokens               $0.09/1M tokens
Output price       $8.00/1M tokens               $0.09/1M tokens
License            Jamba Open Model License      Apache 2.0
Released           Aug 22, 2024                  Sep 19, 2024
Knowledge cutoff   Mar 2024                      Not specified

FAQ

Common questions about Jamba 1.5 Large vs Qwen2.5-Coder 32B Instruct

Q: Which model is better, Jamba 1.5 Large or Qwen2.5-Coder 32B Instruct?
A: Jamba 1.5 Large significantly outperforms across most benchmarks. Jamba 1.5 Large is made by AI21 Labs and Qwen2.5-Coder 32B Instruct is made by Alibaba Cloud / Qwen Team. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

Q: How do their benchmark scores compare?
A: Jamba 1.5 Large scores ARC-C: 93.0%, GSM8k: 87.0%, MMLU: 81.2%, Arena Hard: 65.4%, TruthfulQA: 58.3%. Qwen2.5-Coder 32B Instruct scores HumanEval: 92.7%, GSM8k: 91.1%, MBPP: 90.2%, HellaSwag: 83.0%, Winogrande: 80.8%.

Q: Which model is cheaper?
A: Qwen2.5-Coder 32B Instruct is 22.2x cheaper for input tokens. Jamba 1.5 Large costs $2.00/M input and $8.00/M output via AWS Bedrock. Qwen2.5-Coder 32B Instruct costs $0.09/M input and $0.09/M output via Lambda.

Q: Which has the larger context window?
A: Jamba 1.5 Large supports 256K tokens and Qwen2.5-Coder 32B Instruct supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Q: What are the key differences between the two models?
A: Key differences include context window (256K vs 128K), input pricing ($2.00 vs $0.09/M), and licensing (Jamba Open Model License vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Q: Who develops each model?
A: Jamba 1.5 Large is developed by AI21 Labs and Qwen2.5-Coder 32B Instruct is developed by Alibaba Cloud / Qwen Team.