Model Comparison

LongCat-Flash-Thinking vs Qwen2 72B Instruct

LongCat-Flash-Thinking significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

LongCat-Flash-Thinking leads in both of the 2 compared benchmarks (GPQA and MMLU-Pro), while Qwen2 72B Instruct leads in neither.



Arena Performance

Human preference votes

No arena vote data is available for this comparison.

Pricing Analysis

Price comparison per million tokens

Provider pricing is listed for LongCat-Flash-Thinking only; no pricing data is available for Qwen2 72B Instruct.

Lowest available price from all providers

Meituan
LongCat-Flash-Thinking
Input tokens: $0.30 per 1M
Output tokens: $1.20 per 1M
Best provider: Meituan

Alibaba Cloud / Qwen Team
Qwen2 72B Instruct
Input tokens: not listed
Output tokens: not listed
Best provider: not listed
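
At per-million-token rates, a request's cost is simply its token counts divided by 1,000,000 and multiplied by the listed rates. Below is a minimal Python sketch using LongCat-Flash-Thinking's listed prices; the example token counts are illustrative, not taken from either provider.

```python
# Listed prices for LongCat-Flash-Thinking (USD per million tokens),
# per the comparison above. Example token counts below are illustrative only.
INPUT_PRICE_PER_M = 0.30
OUTPUT_PRICE_PER_M = 1.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the per-million-token rates above."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_M
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    )

# e.g. a 20,000-token prompt that produces a 2,000-token response
print(f"${request_cost(20_000, 2_000):.4f}")  # -> $0.0084
```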

Model Size

Parameter count comparison


LongCat-Flash-Thinking has 488.0B more parameters than Qwen2 72B Instruct, making it 677.8% larger.

Meituan
LongCat-Flash-Thinking
Parameters: 560.0B

Alibaba Cloud / Qwen Team
Qwen2 72B Instruct
Parameters: 72.0B
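
The 677.8% figure is plain arithmetic on the two parameter counts; a minimal Python sketch of that calculation, using the counts listed above:

```python
# Parameter counts from the comparison above, in billions.
longcat_params_b = 560.0
qwen2_params_b = 72.0

diff_b = longcat_params_b - qwen2_params_b      # 488.0B absolute difference
pct_larger = diff_b / qwen2_params_b * 100      # ~677.8% larger
print(f"{diff_b:.1f}B more parameters ({pct_larger:.1f}% larger)")
```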

Context Window

Maximum input and output token capacity

Only LongCat-Flash-Thinking specifies its context limits: 128,000 input tokens and 128,000 output tokens. Qwen2 72B Instruct does not list either limit.

Meituan
LongCat-Flash-Thinking
Input: 128,000 tokens
Output: 128,000 tokens

Alibaba Cloud / Qwen Team
Qwen2 72B Instruct
Input: not listed
Output: not listed
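
As a rough way to use the 128,000-token input limit in practice, the sketch below estimates whether a prompt fits before sending it. The 4-characters-per-token heuristic and the helper names are assumptions for illustration, not part of either model's API; use the provider's tokenizer for accurate counts.

```python
# Rough pre-flight check against LongCat-Flash-Thinking's listed 128,000-token
# input window. The 4-characters-per-token heuristic is an approximation for
# English text only.
MAX_INPUT_TOKENS = 128_000

def estimated_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str) -> bool:
    """True if the estimated prompt length fits under the listed input limit."""
    return estimated_tokens(prompt) <= MAX_INPUT_TOKENS

print(fits_in_context("Summarize the attached report in three bullet points."))
```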

License

Usage and distribution terms

LongCat-Flash-Thinking is licensed under MIT, while Qwen2 72B Instruct uses tongyi-qianwen.

License differences may affect how you can use these models in commercial or open-source projects.

LongCat-Flash-Thinking
License: MIT (open weights)

Qwen2 72B Instruct
License: tongyi-qianwen (open weights)

Release Timeline

When each model was launched

LongCat-Flash-Thinking was released on 2025-09-22, while Qwen2 72B Instruct was released on 2024-07-23.

LongCat-Flash-Thinking is 14 months newer than Qwen2 72B Instruct.

LongCat-Flash-Thinking
Released: Sep 22, 2025 (1.2 years newer)

Qwen2 72B Instruct
Released: Jul 23, 2024
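
The "14 months newer" figure can be reproduced with simple date arithmetic; a minimal Python sketch using the release dates listed above:

```python
from datetime import date

# Release dates from the comparison above.
longcat_release = date(2025, 9, 22)
qwen2_release = date(2024, 7, 23)

months_newer = (
    (longcat_release.year - qwen2_release.year) * 12
    + (longcat_release.month - qwen2_release.month)
)
print(f"{months_newer} months (~{months_newer / 12:.1f} years) newer")  # 14 months (~1.2 years)
```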

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

LongCat-Flash-Thinking (Meituan)
Larger context window (128,000 tokens)
Higher GPQA score (81.5% vs 42.4%)
Higher MMLU-Pro score (82.6% vs 64.4%)

Qwen2 72B Instruct (Alibaba Cloud / Qwen Team)
No benchmark advantages in the compared set.

Detailed Comparison

AI Model Comparison Table

Feature | LongCat-Flash-Thinking (Meituan) | Qwen2 72B Instruct (Alibaba Cloud / Qwen Team)
Parameters | 560.0B | 72.0B
Context window | 128,000 input / 128,000 output | not listed
Input price (per 1M tokens) | $0.30 | not listed
Output price (per 1M tokens) | $1.20 | not listed
License | MIT (open weights) | tongyi-qianwen (open weights)
Release date | Sep 22, 2025 | Jul 23, 2024
GPQA | 81.5% | 42.4%
MMLU-Pro | 82.6% | 64.4%

FAQ

Common questions about LongCat-Flash-Thinking vs Qwen2 72B Instruct

Which model performs better?
LongCat-Flash-Thinking significantly outperforms across most benchmarks. LongCat-Flash-Thinking is made by Meituan and Qwen2 72B Instruct is made by Alibaba Cloud / Qwen Team. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
LongCat-Flash-Thinking scores MATH-500: 99.2%, ZebraLogic: 95.5%, AIME 2024: 93.3%, AIME 2025: 90.6%, and MMLU-Redux: 89.3%. Qwen2 72B Instruct scores GSM8k: 91.1%, CMMLU: 90.1%, HellaSwag: 87.6%, HumanEval: 86.0%, and Winogrande: 85.1%.

Which has the larger context window?
LongCat-Flash-Thinking supports 128K tokens; Qwen2 72B Instruct does not list a context window size. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include licensing (MIT vs tongyi-qianwen). See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
LongCat-Flash-Thinking is developed by Meituan and Qwen2 72B Instruct is developed by Alibaba Cloud / Qwen Team.