Model Comparison

Jamba 1.5 Mini vs Phi 4

Phi 4 outperforms Jamba 1.5 Mini on every benchmark compared here, and is roughly 2.9x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

Phi 4 leads on all 4 benchmarks compared (Arena Hard, GPQA, MMLU, MMLU-Pro); Jamba 1.5 Mini leads on none.


Wed Apr 22 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Phi 4 costs less

For input processing, Jamba 1.5 Mini ($0.20/1M tokens) is 2.9x more expensive than Phi 4 ($0.07/1M tokens).

For output processing, Jamba 1.5 Mini ($0.40/1M tokens) is 2.9x more expensive than Phi 4 ($0.14/1M tokens).

In conclusion, Jamba 1.5 Mini is about 2.9x more expensive than Phi 4 on a blended basis.*

* Using a 3:1 ratio of input to output tokens
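The footnote's blended figure can be reproduced with a short calculation (prices taken from the tables below; `blended_price` is a hypothetical helper for illustration, not an llm-stats.com API):

```python
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Blended cost per 1M tokens, weighting input vs output token volume 3:1."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

jamba = blended_price(0.20, 0.40)  # $0.25 per 1M tokens
phi4 = blended_price(0.07, 0.14)   # $0.0875 per 1M tokens
print(f"Jamba 1.5 Mini: ${jamba:.4f}/1M, Phi 4: ${phi4:.4f}/1M")
print(f"Ratio: {jamba / phi4:.1f}x")  # ~2.9x
```

The same 2.9x ratio holds for input-only and output-only pricing, since both models' output prices are exactly double their input prices.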

Lowest available price from all providers
AI21 Labs
Jamba 1.5 Mini
Input tokens: $0.20
Output tokens: $0.40
Best provider: AWS Bedrock

Microsoft
Phi 4
Input tokens: $0.07
Output tokens: $0.14
Best provider: Deepinfra

Model Size

Parameter count comparison

37.3B diff

Jamba 1.5 Mini has 37.3B more parameters than Phi 4, making it 253.7% larger.
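Both figures follow directly from the parameter counts in the cards below (plain arithmetic, nothing model-specific):

```python
jamba_params = 52.0  # billions of parameters
phi4_params = 14.7

diff = jamba_params - phi4_params      # 37.3B more parameters
pct_larger = diff / phi4_params * 100  # 253.7% larger
print(f"{diff:.1f}B more parameters ({pct_larger:.1f}% larger)")
```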

AI21 Labs
Jamba 1.5 Mini: 52.0B parameters

Microsoft
Phi 4: 14.7B parameters

Context Window

Maximum input and output token capacity

Jamba 1.5 Mini supports 256,144 tokens for both input and output, a roughly 16x larger window than Phi 4, which is limited to 16,000 tokens for each.

AI21 Labs
Jamba 1.5 Mini
Input: 256,144 tokens
Output: 256,144 tokens

Microsoft
Phi 4
Input: 16,000 tokens
Output: 16,000 tokens
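One practical way to read these limits: estimate a document's token count and check whether it fits each model's window. A minimal sketch, assuming the common ~4-characters-per-token heuristic (an approximation only; real tokenizers vary by model and language):

```python
# Context limits from the table above (tokens).
CONTEXT_LIMITS = {"Jamba 1.5 Mini": 256_144, "Phi 4": 16_000}

def fits(text: str, reserve_for_output: int = 1_000) -> dict:
    """Estimate token count (~4 chars/token) and check it, plus an output
    budget, against each model's context window."""
    est_tokens = len(text) // 4
    return {model: est_tokens + reserve_for_output <= limit
            for model, limit in CONTEXT_LIMITS.items()}

doc = "x" * 200_000  # ~50K estimated tokens
print(fits(doc))     # fits Jamba 1.5 Mini's window but not Phi 4's
```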

License

Usage and distribution terms

Jamba 1.5 Mini is licensed under Jamba Open Model License, while Phi 4 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Jamba 1.5 Mini

Jamba Open Model License

Open weights

Phi 4

MIT

Open weights

Release Timeline

When each model was launched

Jamba 1.5 Mini was released on 2024-08-22, while Phi 4 was released on 2024-12-12.

Phi 4 is nearly four months newer than Jamba 1.5 Mini.

Jamba 1.5 Mini: Aug 22, 2024 (1.7 years ago)

Phi 4: Dec 12, 2024 (1.4 years ago)

Knowledge Cutoff

When training data ends

Jamba 1.5 Mini has a knowledge cutoff of 2024-03-05, while Phi 4 has a cutoff of 2024-06-01.

Phi 4 has more recent training data (up to 2024-06-01), making it potentially better informed about events through that date compared to Jamba 1.5 Mini (2024-03-05).

Jamba 1.5 Mini: Mar 2024

Phi 4: Jun 2024 (3 mo newer)

Provider Availability

Jamba 1.5 Mini is available from AWS Bedrock and Google; Phi 4 is available from Deepinfra.

Jamba 1.5 Mini

AWS Bedrock: $0.20/1M input, $0.40/1M output
Google: $0.20/1M input, $0.40/1M output

Phi 4

Deepinfra: $0.07/1M input, $0.14/1M output

* Prices shown are per million tokens


Key Takeaways

Jamba 1.5 Mini: larger context window (256,144 vs 16,000 tokens)
Phi 4: less expensive input tokens ($0.07 vs $0.20/1M)
Phi 4: less expensive output tokens ($0.14 vs $0.40/1M)
Phi 4: higher Arena Hard score (75.4% vs 46.1%)
Phi 4: higher GPQA score (56.1% vs 32.3%)
Phi 4: higher MMLU score (84.8% vs 69.7%)
Phi 4: higher MMLU-Pro score (70.4% vs 42.5%)

Detailed Comparison

[Feature-by-feature comparison table: Jamba 1.5 Mini (AI21 Labs) vs Phi 4 (Microsoft)]

FAQ

Common questions about Jamba 1.5 Mini vs Phi 4

Which model performs better? Phi 4 outperforms Jamba 1.5 Mini on every benchmark they share. Jamba 1.5 Mini is made by AI21 Labs and Phi 4 is made by Microsoft. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare? Jamba 1.5 Mini scores ARC-C: 85.7%, GSM8k: 75.8%, MMLU: 69.7%, TruthfulQA: 54.1%, and Arena Hard: 46.1%. Phi 4 scores MMLU: 84.8%, HumanEval+: 82.8%, HumanEval: 82.6%, MGSM: 80.6%, and MATH: 80.4%.

Which model is cheaper? Phi 4 is 2.9x cheaper for input tokens. Jamba 1.5 Mini costs $0.20/M input and $0.40/M output via AWS Bedrock. Phi 4 costs $0.07/M input and $0.14/M output via Deepinfra.

Which has the larger context window? Jamba 1.5 Mini supports 256K tokens and Phi 4 supports 16K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences? They include context window (256K vs 16K), input pricing ($0.20 vs $0.07/M), and licensing (Jamba Open Model License vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who makes each model? Jamba 1.5 Mini is developed by AI21 Labs and Phi 4 is developed by Microsoft.