Model Comparison

Jamba 1.5 Large vs Magistral Small 2506

Magistral Small 2506 significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

1 shared benchmark

Jamba 1.5 Large leads in 0 of the shared benchmarks, while Magistral Small 2506 leads in 1 (GPQA).

Wed Apr 22 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data for Magistral Small 2506 is unavailable.

Lowest available price from all providers:

AI21 Labs — Jamba 1.5 Large
Input tokens: $2.00
Output tokens: $8.00
Best provider: AWS Bedrock

Mistral AI — Magistral Small 2506
Input tokens: not available
Output tokens: not available
Best provider: not available
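To make the per-million-token rates concrete, here is a quick sketch of what a single request to Jamba 1.5 Large would cost at the listed AWS Bedrock rates; the 10,000-input / 1,000-output token counts are made-up example values, not figures from this page.

```python
# Listed rates for Jamba 1.5 Large on AWS Bedrock, per million tokens.
INPUT_RATE = 2.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 8.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the rates above."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative request: 10,000-token prompt, 1,000-token reply.
print(f"${request_cost(10_000, 1_000):.3f}")  # $0.028
```

Output-token pricing dominates for long completions here, since output tokens cost 4x input tokens at these rates.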

Model Size

Parameter count comparison

374.0B difference

Jamba 1.5 Large has 374.0B more parameters than Magistral Small 2506, making it 1558.3% larger.

AI21 Labs — Jamba 1.5 Large: 398.0B parameters
Mistral AI — Magistral Small 2506: 24.0B parameters
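The size gap is simple arithmetic; this small sketch reproduces the difference and percentage figures from the parameter counts quoted above.

```python
# Parameter counts, in billions, from the comparison above.
jamba = 398.0      # Jamba 1.5 Large
magistral = 24.0   # Magistral Small 2506

diff = jamba - magistral             # absolute gap, in billions
pct_larger = diff / magistral * 100  # how much larger Jamba is, relative to Magistral

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # 374.0B diff, 1558.3% larger
```

Note the percentage is relative to the smaller model, which is why it exceeds 1000% even though Jamba is "only" about 16.6x the size.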

Context Window

Maximum input and output token capacity

Only Jamba 1.5 Large publishes context limits: 256,000 tokens for both input and output. Magistral Small 2506 does not specify its context window.

AI21 Labs — Jamba 1.5 Large
Input: 256,000 tokens
Output: 256,000 tokens

Mistral AI — Magistral Small 2506
Input: not specified
Output: not specified

License

Usage and distribution terms

Jamba 1.5 Large is licensed under Jamba Open Model License, while Magistral Small 2506 uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Jamba 1.5 Large

Jamba Open Model License

Open weights

Magistral Small 2506

Apache 2.0

Open weights

Release Timeline

When each model was launched

Jamba 1.5 Large was released on 2024-08-22, while Magistral Small 2506 was released on 2025-06-10.

Magistral Small 2506 is 10 months newer than Jamba 1.5 Large.

Jamba 1.5 Large

Aug 22, 2024

1.7 years ago

Magistral Small 2506

Jun 10, 2025

10 months ago

10 mo newer

Knowledge Cutoff

When training data ends

Jamba 1.5 Large has a knowledge cutoff of 2024-03-05, while Magistral Small 2506 has a cutoff of 2025-06-01.

Magistral Small 2506 has more recent training data (up to 2025-06-01), making it potentially better informed about events through that date compared to Jamba 1.5 Large (2024-03-05).

Jamba 1.5 Large

Mar 2024

Magistral Small 2506

Jun 2025

1.3 yr newer

Key Takeaways

Jamba 1.5 Large: larger context window (256,000 tokens)
Magistral Small 2506: higher GPQA score (68.2% vs 36.9%)

Detailed Comparison

AI Model Comparison Table: Jamba 1.5 Large (AI21 Labs) vs Magistral Small 2506 (Mistral AI)

FAQ

Common questions about Jamba 1.5 Large vs Magistral Small 2506

Which model performs better overall?
Magistral Small 2506 significantly outperforms across most benchmarks. Jamba 1.5 Large is made by AI21 Labs and Magistral Small 2506 is made by Mistral AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Jamba 1.5 Large scores ARC-C: 93.0%, GSM8k: 87.0%, MMLU: 81.2%, Arena Hard: 65.4%, TruthfulQA: 58.3%. Magistral Small 2506 scores AIME 2024: 70.7%, GPQA: 68.2%, AIME 2025: 62.8%, LiveCodeBench: 51.3%.

Which model has the larger context window?
Jamba 1.5 Large supports 256K tokens, while Magistral Small 2506 does not publish a context window figure. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the two models?
Key differences include licensing (Jamba Open Model License vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Who makes each model?
Jamba 1.5 Large is developed by AI21 Labs, and Magistral Small 2506 is developed by Mistral AI.