Model Comparison

DeepSeek-R1-0528 vs LongCat-Flash-Chat

DeepSeek-R1-0528 shows notably better performance in the majority of benchmarks. LongCat-Flash-Chat is 1.7x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

6 benchmarks

DeepSeek-R1-0528 outperforms in 4 benchmarks (AIME 2025, GPQA, LiveCodeBench, MMLU-Pro), while LongCat-Flash-Chat leads in 2 (SWE-Bench Verified, Terminal-Bench).


Wed Apr 15 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

LongCat-Flash-Chat costs less

For input processing, DeepSeek-R1-0528 ($0.50/1M tokens) is 1.7x more expensive than LongCat-Flash-Chat ($0.30/1M tokens).

For output processing, DeepSeek-R1-0528 ($2.15/1M tokens) is 1.8x more expensive than LongCat-Flash-Chat ($1.20/1M tokens).

In conclusion, DeepSeek-R1-0528 is more expensive than LongCat-Flash-Chat.*

* Using a 3:1 ratio of input to output tokens
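
The blended cost at that 3:1 input:output ratio can be reproduced with a short sketch. The `blended_price` helper is illustrative, using the lowest-provider prices quoted on this page:

```python
# Sketch: blended price per 1M tokens at a 3:1 input:output weighting,
# using the lowest-provider prices quoted above.
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Weighted average price per 1M tokens."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

deepseek = blended_price(0.50, 2.15)  # $0.9125 per 1M tokens
longcat = blended_price(0.30, 1.20)   # $0.5250 per 1M tokens
print(f"DeepSeek-R1-0528: ${deepseek:.4f}/1M")
print(f"LongCat-Flash-Chat: ${longcat:.4f}/1M")
print(f"Ratio: {deepseek / longcat:.2f}x")  # ~1.74x overall
```

This makes the footnote concrete: DeepSeek-R1-0528 is roughly 1.7x more expensive overall under this mix, consistent with the per-direction ratios above.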

Lowest available price from all providers
DeepSeek-R1-0528 (DeepSeek)
Input tokens: $0.50 / 1M
Output tokens: $2.15 / 1M
Best provider: Deepinfra

LongCat-Flash-Chat (Meituan)
Input tokens: $0.30 / 1M
Output tokens: $1.20 / 1M
Best provider: Meituan

Model Size

Parameter count comparison


DeepSeek-R1-0528 has 111.0B more parameters than LongCat-Flash-Chat, making it 19.8% larger.

DeepSeek-R1-0528 (DeepSeek): 671.0B parameters
LongCat-Flash-Chat (Meituan): 560.0B parameters

Context Window

Maximum input and output token capacity

DeepSeek-R1-0528 accepts 131,072 input tokens compared to LongCat-Flash-Chat's 128,000, and can generate responses of up to 131,072 tokens, while LongCat-Flash-Chat's output is capped at 128,000 tokens.

DeepSeek-R1-0528 (DeepSeek)
Input: 131,072 tokens
Output: 131,072 tokens

LongCat-Flash-Chat (Meituan)
Input: 128,000 tokens
Output: 128,000 tokens
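
To illustrate what these limits mean in practice, here is a rough sketch of a context-window fit check. The 4-characters-per-token estimate is a crude heuristic rather than a real tokenizer, and `fits_context` is a hypothetical helper:

```python
# Sketch: estimate whether a prompt fits a model's context window.
# The 4-chars-per-token figure is a rough heuristic, not a real tokenizer.
CONTEXT_WINDOWS = {
    "DeepSeek-R1-0528": 131_072,
    "LongCat-Flash-Chat": 128_000,
}

def fits_context(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_WINDOWS[model]

doc = "x" * 520_000  # ~130K estimated tokens
print(fits_context(doc, "DeepSeek-R1-0528"))    # True  (130,000 <= 131,072)
print(fits_context(doc, "LongCat-Flash-Chat"))  # False (130,000 > 128,000)
```

The 3,072-token gap is small, but as this example shows it can decide whether a borderline document fits in one request.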

License

Usage and distribution terms

Both models are licensed under MIT.

Both models share the same licensing terms, providing consistent usage rights.

DeepSeek-R1-0528

MIT

Open weights

LongCat-Flash-Chat

MIT

Open weights

Release Timeline

When each model was launched

DeepSeek-R1-0528 was released on 2025-05-28, while LongCat-Flash-Chat was released on 2025-08-29.

LongCat-Flash-Chat is 3 months newer than DeepSeek-R1-0528.

DeepSeek-R1-0528: May 28, 2025 (about 10 months ago)

LongCat-Flash-Chat: Aug 29, 2025 (about 7 months ago; 3 months newer)

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.


Provider Availability

DeepSeek-R1-0528 is available from DeepInfra, DeepSeek, Novita. LongCat-Flash-Chat is available from Meituan.

DeepSeek-R1-0528

Deepinfra: Input $0.50/1M, Output $2.15/1M
DeepSeek: Input $0.55/1M, Output $2.19/1M
Novita: Input $0.70/1M, Output $2.50/1M

LongCat-Flash-Chat

Meituan: Input $0.30/1M, Output $1.20/1M
* Prices shown are per million tokens
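
Given several providers, the cheapest one at a 3:1 input:output mix can be selected with a small sketch. The `PROVIDERS` table restates the per-provider prices listed above; `blended` is an illustrative helper:

```python
# Sketch: pick the cheapest provider for DeepSeek-R1-0528 at a 3:1
# input:output mix, using the per-provider prices listed above.
PROVIDERS = {
    "Deepinfra": (0.50, 2.15),
    "DeepSeek":  (0.55, 2.19),
    "Novita":    (0.70, 2.50),
}

def blended(prices: tuple, ratio: float = 3.0) -> float:
    """Weighted average price per 1M tokens for (input, output) prices."""
    inp, out = prices
    return (inp * ratio + out) / (ratio + 1)

best = min(PROVIDERS, key=lambda p: blended(PROVIDERS[p]))
print(best)  # Deepinfra
```

This matches the "Best provider" entry above: Deepinfra's blended price (~$0.91/1M) edges out DeepSeek's own API (~$0.96/1M) and Novita (~$1.15/1M).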


Key Takeaways

DeepSeek-R1-0528 advantages:
Larger context window (131,072 tokens)
Higher AIME 2025 score (87.5% vs 61.3%)
Higher GPQA score (81.0% vs 73.2%)
Higher LiveCodeBench score (73.3% vs 48.0%)
Higher MMLU-Pro score (85.0% vs 82.7%)

LongCat-Flash-Chat advantages:
Less expensive input tokens ($0.30 vs $0.50 per 1M)
Less expensive output tokens ($1.20 vs $2.15 per 1M)
Higher SWE-Bench Verified score (60.4% vs 44.6%)
Higher Terminal-Bench score (39.5% vs 5.7%)

Detailed Comparison

[Feature-by-feature comparison table: DeepSeek-R1-0528 (DeepSeek) vs LongCat-Flash-Chat (Meituan)]

FAQ

Common questions about DeepSeek-R1-0528 vs LongCat-Flash-Chat

Which model performs better overall?
DeepSeek-R1-0528 shows notably better performance in the majority of benchmarks. DeepSeek-R1-0528 is made by DeepSeek and LongCat-Flash-Chat is made by Meituan. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek-R1-0528 scores MMLU-Redux: 93.4%, SimpleQA: 92.3%, AIME 2024: 91.4%, AIME 2025: 87.5%, MMLU-Pro: 85.0%. LongCat-Flash-Chat scores MATH-500: 96.4%, MMLU: 89.7%, IFEval: 89.6%, ZebraLogic: 89.3%, HumanEval: 88.4%.

Which model is cheaper?
LongCat-Flash-Chat is 1.7x cheaper for input tokens. DeepSeek-R1-0528 costs $0.50/M input and $2.15/M output via Deepinfra. LongCat-Flash-Chat costs $0.30/M input and $1.20/M output via Meituan.

Which model has the larger context window?
DeepSeek-R1-0528 supports 131K tokens and LongCat-Flash-Chat supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include context window (131K vs 128K) and input pricing ($0.50 vs $0.30/M). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
DeepSeek-R1-0528 is developed by DeepSeek and LongCat-Flash-Chat is developed by Meituan.