Model Comparison

Jamba 1.5 Mini vs LongCat-Flash-Lite

LongCat-Flash-Lite outperforms Jamba 1.5 Mini on all three shared benchmarks and is roughly 1.4x cheaper per token on a blended 3:1 input:output basis.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

Jamba 1.5 Mini leads in none of the three shared benchmarks, while LongCat-Flash-Lite leads in all three (GPQA, MMLU, MMLU-Pro).


Wed Apr 15 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

LongCat-Flash-Lite costs less

For input processing, Jamba 1.5 Mini ($0.20/1M tokens) is 2.0x more expensive than LongCat-Flash-Lite ($0.10/1M tokens).

For output processing, Jamba 1.5 Mini ($0.40/1M tokens) costs the same as LongCat-Flash-Lite ($0.40/1M tokens).

In conclusion, Jamba 1.5 Mini is roughly 1.4x more expensive than LongCat-Flash-Lite overall.*

* Using a 3:1 ratio of input to output tokens
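The footnote's blended figure can be checked with a short sketch (an illustration only, not part of the source data; prices are the per-1M-token rates quoted above):

```python
# Blended price per 1M tokens, weighting input:output tokens 3:1.
def blended_price(input_price, output_price, input_parts=3, output_parts=1):
    total = input_parts + output_parts
    return (input_price * input_parts + output_price * output_parts) / total

jamba = blended_price(0.20, 0.40)    # $0.25 per 1M blended tokens
longcat = blended_price(0.10, 0.40)  # $0.175 per 1M blended tokens
print(round(jamba / longcat, 1))     # 1.4
```

At a 3:1 ratio the identical output prices are diluted, so the 2.0x input-price gap shrinks to about 1.4x overall.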

Lowest available price from all providers
AI21 Labs
Jamba 1.5 Mini
Input tokens: $0.20
Output tokens: $0.40
Best provider: AWS Bedrock
Meituan
LongCat-Flash-Lite
Input tokens: $0.10
Output tokens: $0.40
Best provider: Meituan

Model Size

Parameter count comparison

16.5B diff

LongCat-Flash-Lite has 16.5B more parameters than Jamba 1.5 Mini, making it 31.7% larger.
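The 16.5B and 31.7% figures follow directly from the two parameter counts (a quick check using the numbers listed in this section):

```python
# Parameter counts in billions, as listed below.
jamba_params = 52.0    # Jamba 1.5 Mini
longcat_params = 68.5  # LongCat-Flash-Lite

diff = longcat_params - jamba_params  # absolute gap in billions
pct = diff / jamba_params * 100       # size increase relative to Jamba 1.5 Mini
print(f"{diff:.1f}B diff, {pct:.1f}% larger")  # 16.5B diff, 31.7% larger
```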

AI21 Labs
Jamba 1.5 Mini
52.0B parameters
Meituan
LongCat-Flash-Lite
68.5B parameters

Context Window

Maximum input and output token capacity

Jamba 1.5 Mini accepts 256,144 input tokens, slightly more than LongCat-Flash-Lite's 256,000. The larger gap is in output: Jamba 1.5 Mini can generate responses up to 256,144 tokens, while LongCat-Flash-Lite is limited to 128,000 tokens.

AI21 Labs
Jamba 1.5 Mini
Input: 256,144 tokens
Output: 256,144 tokens
Meituan
LongCat-Flash-Lite
Input: 256,000 tokens
Output: 128,000 tokens

License

Usage and distribution terms

Jamba 1.5 Mini is licensed under Jamba Open Model License, while LongCat-Flash-Lite uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Jamba 1.5 Mini

Jamba Open Model License

Open weights

LongCat-Flash-Lite

MIT

Open weights

Release Timeline

When each model was launched

Jamba 1.5 Mini was released on 2024-08-22, while LongCat-Flash-Lite was released on 2026-02-05.

LongCat-Flash-Lite is 18 months newer than Jamba 1.5 Mini.

Jamba 1.5 Mini

Aug 22, 2024

1.6 years ago

LongCat-Flash-Lite

Feb 5, 2026

2 months ago

1.5yr newer

Knowledge Cutoff

When training data ends

Jamba 1.5 Mini has a documented knowledge cutoff of 2024-03-05, while LongCat-Flash-Lite's cutoff date is not specified.

We can confirm Jamba 1.5 Mini's training data extends to 2024-03-05, but cannot make a direct comparison without LongCat-Flash-Lite's cutoff date.

Jamba 1.5 Mini

Mar 2024

LongCat-Flash-Lite

Not specified

Provider Availability

Jamba 1.5 Mini is available from AWS Bedrock and Google. LongCat-Flash-Lite is available from Meituan.

Jamba 1.5 Mini

AWS Bedrock
Input: $0.20/1M • Output: $0.40/1M
Google
Input: $0.20/1M • Output: $0.40/1M

LongCat-Flash-Lite

Meituan
Input: $0.10/1M • Output: $0.40/1M
* Prices shown are per million tokens


Key Takeaways

Jamba 1.5 Mini: larger context window (up to 256,144 tokens)
LongCat-Flash-Lite: less expensive input tokens ($0.10 vs $0.20 per 1M)
LongCat-Flash-Lite: higher GPQA score (66.8% vs 32.3%)
LongCat-Flash-Lite: higher MMLU score (85.5% vs 69.7%)
LongCat-Flash-Lite: higher MMLU-Pro score (78.3% vs 42.5%)

Detailed Comparison

[Comparison table: feature-by-feature breakdown of Jamba 1.5 Mini (AI21 Labs) vs LongCat-Flash-Lite (Meituan)]

FAQ

Common questions about Jamba 1.5 Mini vs LongCat-Flash-Lite

LongCat-Flash-Lite significantly outperforms across most benchmarks. Jamba 1.5 Mini is made by AI21 Labs and LongCat-Flash-Lite is made by Meituan. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

Jamba 1.5 Mini scores ARC-C: 85.7%, GSM8k: 75.8%, MMLU: 69.7%, TruthfulQA: 54.1%, Arena Hard: 46.1%. LongCat-Flash-Lite scores MATH-500: 96.8%, MMLU: 85.5%, CMMLU: 82.5%, MMLU-Pro: 78.3%, Tau2 Retail: 73.1%.

LongCat-Flash-Lite is 2.0x cheaper for input tokens. Jamba 1.5 Mini costs $0.20/M input and $0.40/M output via AWS Bedrock. LongCat-Flash-Lite costs $0.10/M input and $0.40/M output via Meituan.

Both models support roughly 256K input tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

Key differences include output limits (256,144 vs 128,000 tokens), input pricing ($0.20 vs $0.10/M), and licensing (Jamba Open Model License vs MIT). See the full comparison above for benchmark-by-benchmark results.

Jamba 1.5 Mini is developed by AI21 Labs and LongCat-Flash-Lite is developed by Meituan.