Model Comparison

DeepSeek-V3 0324 vs Jamba 1.5 Mini

DeepSeek-V3 0324 significantly outperforms Jamba 1.5 Mini across most benchmarks, while Jamba 1.5 Mini is roughly 2.0x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

2 benchmarks

DeepSeek-V3 0324 leads in both reported benchmarks (GPQA and MMLU-Pro), while Jamba 1.5 Mini leads in none.


Fri Apr 17 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Jamba 1.5 Mini costs less

For input processing, DeepSeek-V3 0324 ($0.28/1M tokens) is 1.4x more expensive than Jamba 1.5 Mini ($0.20/1M tokens).

For output processing, DeepSeek-V3 0324 ($1.14/1M tokens) is 2.8x more expensive than Jamba 1.5 Mini ($0.40/1M tokens).

In conclusion, DeepSeek-V3 0324 is more expensive than Jamba 1.5 Mini.*

* Using a 3:1 ratio of input to output tokens
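The blended-cost arithmetic behind that footnote can be sketched in Python. The prices are the Novita and AWS Bedrock rates quoted above, and the 3:1 input:output weighting is the same assumption used in the conclusion:

```python
# Sketch: blended price per 1M tokens at an assumed 3:1 input:output ratio,
# using the per-million-token prices quoted above.

def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for a given input:output mix."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

deepseek = blended_price(0.28, 1.14)   # DeepSeek-V3 0324
jamba = blended_price(0.20, 0.40)      # Jamba 1.5 Mini

print(f"DeepSeek-V3 0324: ${deepseek:.3f}/1M blended")  # $0.495/1M
print(f"Jamba 1.5 Mini:   ${jamba:.3f}/1M blended")     # $0.250/1M
print(f"Ratio: {deepseek / jamba:.1f}x")                # 2.0x
```

The 2.0x figure cited in the summary falls out of this blended rate, even though the raw input-price gap is only 1.4x.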

Lowest available price from all providers
DeepSeek
DeepSeek-V3 0324
Input tokens: $0.28
Output tokens: $1.14
Best provider: Novita

AI21 Labs
Jamba 1.5 Mini
Input tokens: $0.20
Output tokens: $0.40
Best provider: AWS Bedrock

Model Size

Parameter count comparison

619.0B diff

DeepSeek-V3 0324 has 619.0B more parameters than Jamba 1.5 Mini, making it 1190.4% larger.

DeepSeek
DeepSeek-V3 0324
671.0B parameters

AI21 Labs
Jamba 1.5 Mini
52.0B parameters
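The size gap is straightforward arithmetic; a quick sketch using the parameter counts listed above:

```python
# Sketch: parameter-count comparison, counts in billions as listed above.
deepseek_b = 671.0   # DeepSeek-V3 0324
jamba_b = 52.0       # Jamba 1.5 Mini

diff_b = deepseek_b - jamba_b                        # absolute difference
pct_larger = (deepseek_b - jamba_b) / jamba_b * 100  # relative to Jamba

print(f"{diff_b:.1f}B difference, {pct_larger:.1f}% larger")
# 619.0B difference, 1190.4% larger
```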

Context Window

Maximum input and output token capacity

Jamba 1.5 Mini accepts 256,144 input tokens compared to DeepSeek-V3 0324's 163,840 tokens. Jamba 1.5 Mini can generate longer responses up to 256,144 tokens, while DeepSeek-V3 0324 is limited to 163,840 tokens.

DeepSeek
DeepSeek-V3 0324
Input: 163,840 tokens
Output: 163,840 tokens

AI21 Labs
Jamba 1.5 Mini
Input: 256,144 tokens
Output: 256,144 tokens
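To illustrate why the limits matter in practice, here is a minimal sketch that checks which model can hold a prompt of a given size. The token counts are assumed inputs; real usage would need each provider's tokenizer:

```python
# Sketch: which models' context windows can hold a prompt of a given length,
# using the input limits listed above.

CONTEXT_LIMITS = {
    "DeepSeek-V3 0324": 163_840,
    "Jamba 1.5 Mini": 256_144,
}

def models_that_fit(prompt_tokens: int) -> list[str]:
    """Return the models whose context window can hold the prompt."""
    return [name for name, limit in CONTEXT_LIMITS.items()
            if prompt_tokens <= limit]

print(models_that_fit(150_000))  # both models fit
print(models_that_fit(200_000))  # only Jamba 1.5 Mini
```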

License

Usage and distribution terms

DeepSeek-V3 0324 is licensed under MIT + Model License (Commercial use allowed), while Jamba 1.5 Mini uses Jamba Open Model License.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-V3 0324

MIT + Model License (Commercial use allowed)

Open weights

Jamba 1.5 Mini

Jamba Open Model License

Open weights

Release Timeline

When each model was launched

DeepSeek-V3 0324 was released on 2025-03-25, while Jamba 1.5 Mini was released on 2024-08-22.

DeepSeek-V3 0324 is 7 months newer than Jamba 1.5 Mini.

DeepSeek-V3 0324

Mar 25, 2025

1.1 years ago

7mo newer
Jamba 1.5 Mini

Aug 22, 2024

1.7 years ago

Knowledge Cutoff

When training data ends

Jamba 1.5 Mini has a documented knowledge cutoff of 2024-03-05, while DeepSeek-V3 0324's cutoff date is not specified.

We can confirm Jamba 1.5 Mini's training data extends to 2024-03-05, but cannot make a direct comparison without DeepSeek-V3 0324's cutoff date.

DeepSeek-V3 0324: not specified

Jamba 1.5 Mini: Mar 2024

Provider Availability

DeepSeek-V3 0324 is available from Novita. Jamba 1.5 Mini is available from AWS Bedrock and Google.

DeepSeek-V3 0324

Novita
Input: $0.28/1M • Output: $1.14/1M

Jamba 1.5 Mini

AWS Bedrock
Input: $0.20/1M • Output: $0.40/1M

Google
Input: $0.20/1M • Output: $0.40/1M
* Prices shown are per million tokens
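The "lowest available price from all providers" selection can be reproduced with a small helper. The provider entries come from the lists above, and the blended-price weighting reuses the same 3:1 assumption as the pricing section:

```python
# Sketch: pick the lowest-priced provider per model, ranked by blended price
# at an assumed 3:1 input:output ratio. Entries: (provider, input $, output $).

PROVIDERS = {
    "DeepSeek-V3 0324": [("Novita", 0.28, 1.14)],
    "Jamba 1.5 Mini": [("AWS Bedrock", 0.20, 0.40), ("Google", 0.20, 0.40)],
}

def cheapest_provider(model: str, in_ratio: int = 3, out_ratio: int = 1) -> str:
    """Name of the cheapest provider by blended price; ties keep list order."""
    def blended(entry):
        _, inp, outp = entry
        return (inp * in_ratio + outp * out_ratio) / (in_ratio + out_ratio)
    return min(PROVIDERS[model], key=blended)[0]

print(cheapest_provider("DeepSeek-V3 0324"))  # Novita
print(cheapest_provider("Jamba 1.5 Mini"))    # AWS Bedrock
```

Note that AWS Bedrock and Google tie on price for Jamba 1.5 Mini; `min` keeps the first entry, matching the "best provider" shown above.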


Key Takeaways

DeepSeek-V3 0324: higher GPQA score (68.4% vs 32.3%)
DeepSeek-V3 0324: higher MMLU-Pro score (81.2% vs 42.5%)
Jamba 1.5 Mini: larger context window (256,144 vs 163,840 tokens)
Jamba 1.5 Mini: less expensive input tokens ($0.20 vs $0.28/1M)
Jamba 1.5 Mini: less expensive output tokens ($0.40 vs $1.14/1M)


FAQ

Common questions about DeepSeek-V3 0324 vs Jamba 1.5 Mini

Which model performs better overall?
DeepSeek-V3 0324 significantly outperforms Jamba 1.5 Mini across most benchmarks. DeepSeek-V3 0324 is made by DeepSeek and Jamba 1.5 Mini is made by AI21 Labs. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek-V3 0324 scores MATH-500: 94.0%, MMLU-Pro: 81.2%, GPQA: 68.4%, AIME 2024: 59.4%, and LiveCodeBench: 49.2%. Jamba 1.5 Mini scores ARC-C: 85.7%, GSM8k: 75.8%, MMLU: 69.7%, TruthfulQA: 54.1%, and Arena Hard: 46.1%.

Which model is cheaper?
Jamba 1.5 Mini is 1.4x cheaper for input tokens. DeepSeek-V3 0324 costs $0.28/M input and $1.14/M output via Novita. Jamba 1.5 Mini costs $0.20/M input and $0.40/M output via AWS Bedrock.

Which model has the larger context window?
DeepSeek-V3 0324 supports 164K tokens and Jamba 1.5 Mini supports 256K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the two models?
Key differences include context window (164K vs 256K), input pricing ($0.28 vs $0.20/M), and licensing (MIT + Model License with commercial use allowed vs Jamba Open Model License). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
DeepSeek-V3 0324 is developed by DeepSeek and Jamba 1.5 Mini is developed by AI21 Labs.