Model Comparison

Jamba 1.5 Large vs Phi-3.5-mini-instruct

Jamba 1.5 Large significantly outperforms across most benchmarks. Phi-3.5-mini-instruct is 35.0x cheaper per token at a blended 3:1 input:output ratio.

Performance Benchmarks

Comparative analysis across standard metrics

7 benchmarks

Jamba 1.5 Large outperforms in 6 benchmarks (ARC-C, Arena Hard, GPQA, GSM8k, MMLU, MMLU-Pro), while Phi-3.5-mini-instruct is better at 1 benchmark (TruthfulQA).

Jamba 1.5 Large significantly outperforms across most benchmarks.

Tue Apr 21 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Phi-3.5-mini-instruct costs less

For input processing, Jamba 1.5 Large ($2.00/1M tokens) is 20.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

For output processing, Jamba 1.5 Large ($8.00/1M tokens) is 80.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).

In conclusion, at a blended rate Jamba 1.5 Large ($3.50/1M tokens) is 35.0x more expensive than Phi-3.5-mini-instruct ($0.10/1M tokens).*

* Using a 3:1 ratio of input to output tokens
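The blended figure above can be reproduced with a short calculation (a sketch; the 3:1 ratio and per-million-token prices are the ones listed in this section):

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Weighted average price per 1M tokens, given an input:output token ratio."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

jamba = blended_price(2.00, 8.00)  # $3.50 per 1M tokens
phi = blended_price(0.10, 0.10)    # $0.10 per 1M tokens
print(f"Jamba: ${jamba:.2f}/1M, Phi: ${phi:.2f}/1M, ratio: {jamba / phi:.1f}x")
```

At a 3:1 ratio this yields the 35.0x overall price gap quoted above, sitting between the 20.0x input and 80.0x output multiples.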

Lowest available price from all providers
AI21 Labs
Jamba 1.5 Large
Input tokens: $2.00
Output tokens: $8.00
Best provider: AWS Bedrock
Microsoft
Phi-3.5-mini-instruct
Input tokens: $0.10
Output tokens: $0.10
Best provider: Azure

Model Size

Parameter count comparison

394.2B diff

Jamba 1.5 Large has 394.2B more parameters than Phi-3.5-mini-instruct, making it 10373.7% larger.

AI21 Labs
Jamba 1.5 Large
398.0B parameters
Microsoft
Phi-3.5-mini-instruct
3.8B parameters
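The 394.2B difference and the 10373.7% figure follow directly from the two parameter counts listed above:

```python
# Parameter counts from the comparison above.
jamba_params = 398.0e9
phi_params = 3.8e9

diff = jamba_params - phi_params      # absolute difference in parameters
pct_larger = diff / phi_params * 100  # relative size increase
print(f"{diff / 1e9:.1f}B more parameters ({pct_larger:.1f}% larger)")
```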

Context Window

Maximum input and output token capacity

Jamba 1.5 Large accepts 256,000 input tokens compared to Phi-3.5-mini-instruct's 128,000 tokens. Jamba 1.5 Large can also generate responses of up to 256,000 tokens, while Phi-3.5-mini-instruct is limited to 128,000 tokens.

AI21 Labs
Jamba 1.5 Large
Input256,000 tokens
Output256,000 tokens
Microsoft
Phi-3.5-mini-instruct
Input128,000 tokens
Output128,000 tokens
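As a rough illustration of what these limits mean in practice, here is a minimal fit check. The 4-characters-per-token heuristic and the model keys are illustrative assumptions; exact counts require each model's own tokenizer.

```python
# Context window sizes from the table above.
CONTEXT_WINDOWS = {
    "jamba-1.5-large": 256_000,
    "phi-3.5-mini-instruct": 128_000,
}

def fits_context(text: str, model: str, reserved_output: int = 1_000) -> bool:
    """Estimate whether a prompt fits, reserving room for the response."""
    est_tokens = len(text) // 4  # crude heuristic, not a real tokenizer
    return est_tokens + reserved_output <= CONTEXT_WINDOWS[model]

doc = "x" * 600_000  # roughly 150K estimated tokens
print(fits_context(doc, "jamba-1.5-large"))        # True: within 256K
print(fits_context(doc, "phi-3.5-mini-instruct"))  # False: exceeds 128K
```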

License

Usage and distribution terms

Jamba 1.5 Large is licensed under Jamba Open Model License, while Phi-3.5-mini-instruct uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Jamba 1.5 Large

Jamba Open Model License

Open weights

Phi-3.5-mini-instruct

MIT

Open weights

Release Timeline

When each model was launched

Jamba 1.5 Large was released on 2024-08-22, while Phi-3.5-mini-instruct was released on 2024-08-23.

Phi-3.5-mini-instruct is one day newer than Jamba 1.5 Large.

Jamba 1.5 Large

Aug 22, 2024

1.7 years ago

Phi-3.5-mini-instruct

Aug 23, 2024

1.7 years ago

1d newer
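The one-day gap can be verified from the release dates listed above:

```python
from datetime import date

# Release dates from the timeline above.
jamba_release = date(2024, 8, 22)
phi_release = date(2024, 8, 23)

gap = phi_release - jamba_release
print(f"Phi-3.5-mini-instruct is {gap.days} day newer")  # 1 day
```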

Knowledge Cutoff

When training data ends

Jamba 1.5 Large has a documented knowledge cutoff of 2024-03-05, while Phi-3.5-mini-instruct's cutoff date is not specified.

We can confirm Jamba 1.5 Large's training data extends to 2024-03-05, but cannot make a direct comparison without Phi-3.5-mini-instruct's cutoff date.

Jamba 1.5 Large

Mar 2024

Phi-3.5-mini-instruct

Not specified

Provider Availability

Jamba 1.5 Large is available from AWS Bedrock and Google. Phi-3.5-mini-instruct is available from Azure.

Jamba 1.5 Large

AWS Bedrock
Input: $2.00/1M • Output: $8.00/1M
Google
Input: $2.00/1M • Output: $8.00/1M

Phi-3.5-mini-instruct

Azure
Input: $0.10/1M • Output: $0.10/1M
* Prices shown are per million tokens

Outputs Comparison


Key Takeaways

Jamba 1.5 Large advantages:
Larger context window (256,000 vs 128,000 tokens)
Higher ARC-C score (93.0% vs 84.6%)
Higher Arena Hard score (65.4% vs 37.0%)
Higher GPQA score (36.9% vs 30.4%)
Higher GSM8k score (87.0% vs 86.2%)
Higher MMLU score (81.2% vs 69.0%)
Higher MMLU-Pro score (53.5% vs 47.4%)

Phi-3.5-mini-instruct advantages:
Less expensive input tokens ($0.10 vs $2.00/1M)
Less expensive output tokens ($0.10 vs $8.00/1M)
Higher TruthfulQA score (64.0% vs 58.3%)

Detailed Comparison

Feature            Jamba 1.5 Large (AI21 Labs)    Phi-3.5-mini-instruct (Microsoft)
Parameters         398.0B                         3.8B
Context window     256,000 tokens                 128,000 tokens
Input price        $2.00/1M                       $0.10/1M
Output price       $8.00/1M                       $0.10/1M
License            Jamba Open Model License       MIT
Released           Aug 22, 2024                   Aug 23, 2024

FAQ

Common questions about Jamba 1.5 Large vs Phi-3.5-mini-instruct

Which model performs better overall?
Jamba 1.5 Large significantly outperforms across most benchmarks. Jamba 1.5 Large is made by AI21 Labs and Phi-3.5-mini-instruct is made by Microsoft. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Jamba 1.5 Large scores ARC-C: 93.0%, GSM8k: 87.0%, MMLU: 81.2%, Arena Hard: 65.4%, TruthfulQA: 58.3%. Phi-3.5-mini-instruct scores GSM8k: 86.2%, ARC-C: 84.6%, RULER: 84.1%, PIQA: 81.0%, OpenBookQA: 79.2%.

Which model is cheaper?
Phi-3.5-mini-instruct is 20.0x cheaper for input tokens. Jamba 1.5 Large costs $2.00/M input and $8.00/M output via AWS Bedrock. Phi-3.5-mini-instruct costs $0.10/M input and $0.10/M output via Azure.

Which model has the larger context window?
Jamba 1.5 Large supports 256K tokens and Phi-3.5-mini-instruct supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the two models?
Key differences include context window (256K vs 128K), input pricing ($2.00 vs $0.10/M), and licensing (Jamba Open Model License vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who makes these models?
Jamba 1.5 Large is developed by AI21 Labs and Phi-3.5-mini-instruct is developed by Microsoft.