Model Comparison

Qwen3.5-397B-A17B vs LongCat-Flash-Thinking-2601

Qwen3.5-397B-A17B significantly outperforms LongCat-Flash-Thinking-2601 across the benchmarks compared here, while LongCat-Flash-Thinking-2601 is roughly 2.6x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

6 benchmarks

Qwen3.5-397B-A17B scores higher on all six benchmarks compared (BrowseComp, BrowseComp-zh, GPQA, Humanity's Last Exam, IMO-AnswerBench, SWE-Bench Verified); LongCat-Flash-Thinking-2601 leads on none.


Thu Apr 16 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

LongCat-Flash-Thinking-2601 costs less

For input processing, Qwen3.5-397B-A17B ($0.60/1M tokens) is 2.0x more expensive than LongCat-Flash-Thinking-2601 ($0.30/1M tokens).

For output processing, Qwen3.5-397B-A17B ($3.60/1M tokens) is 3.0x more expensive than LongCat-Flash-Thinking-2601 ($1.20/1M tokens).

In conclusion, at a blended rate Qwen3.5-397B-A17B is about 2.6x more expensive than LongCat-Flash-Thinking-2601.*

* Using a 3:1 ratio of input to output tokens
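The blended comparison above can be reproduced with a short calculation. This sketch assumes the 3:1 input:output token ratio stated in the footnote and the per-million-token prices listed in this section; the function name is illustrative.

```python
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    """Weighted average price per 1M tokens, given an input:output token ratio."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# Prices from the comparison: Qwen $0.60 in / $3.60 out, LongCat $0.30 in / $1.20 out.
qwen = blended_price(0.60, 3.60)     # (3 * 0.60 + 1 * 3.60) / 4 = $1.35 per 1M tokens
longcat = blended_price(0.30, 1.20)  # (3 * 0.30 + 1 * 1.20) / 4 = $0.525 per 1M tokens
print(f"Blended cost ratio: {qwen / longcat:.1f}x")  # prints "Blended cost ratio: 2.6x"
```

At this ratio the blended prices work out to $1.35 vs $0.525 per million tokens, which is where the roughly 2.6x figure comes from.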

Lowest available price from all providers
Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
  Input tokens: $0.60
  Output tokens: $3.60
  Best provider: Novita

LongCat-Flash-Thinking-2601 (Meituan)
  Input tokens: $0.30
  Output tokens: $1.20
  Best provider: Meituan

Model Size

Parameter count comparison

163.0B diff

LongCat-Flash-Thinking-2601 has 163.0B more parameters than Qwen3.5-397B-A17B, making it 41.1% larger.

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team): 397.0B parameters

LongCat-Flash-Thinking-2601 (Meituan): 560.0B parameters

Context Window

Maximum input and output token capacity

Qwen3.5-397B-A17B accepts 262,144 input tokens compared to LongCat-Flash-Thinking-2601's 128,000 tokens. LongCat-Flash-Thinking-2601 can generate longer responses up to 128,000 tokens, while Qwen3.5-397B-A17B is limited to 64,000 tokens.

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)
  Input: 262,144 tokens
  Output: 64,000 tokens

LongCat-Flash-Thinking-2601 (Meituan)
  Input: 128,000 tokens
  Output: 128,000 tokens
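To make the context-window difference concrete, here is a rough feasibility check using the input limits above. The ~1.3 tokens-per-word figure is a common rule of thumb for English text, not an exact tokenizer count, and the function name is illustrative.

```python
# Input limits from the comparison above.
INPUT_LIMITS = {
    "Qwen3.5-397B-A17B": 262_144,
    "LongCat-Flash-Thinking-2601": 128_000,
}

def fits_in_context(word_count, model, tokens_per_word=1.3):
    """Estimate whether a document of `word_count` words fits the model's input window."""
    estimated_tokens = int(word_count * tokens_per_word)
    return estimated_tokens <= INPUT_LIMITS[model]

# A ~150,000-word book is roughly 195,000 tokens under this heuristic:
print(fits_in_context(150_000, "Qwen3.5-397B-A17B"))          # True  (195,000 <= 262,144)
print(fits_in_context(150_000, "LongCat-Flash-Thinking-2601"))  # False (195,000 > 128,000)
```

In other words, a full-length book that fits in Qwen3.5-397B-A17B's window would need to be chunked for LongCat-Flash-Thinking-2601.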

Input Capabilities

Supported data types and modalities

Qwen3.5-397B-A17B supports multimodal inputs, whereas LongCat-Flash-Thinking-2601 does not.

Qwen3.5-397B-A17B can handle both text and other forms of data like images, making it suitable for multimodal applications.

Qwen3.5-397B-A17B: Text, Images, Audio, Video

LongCat-Flash-Thinking-2601: Text only

License

Usage and distribution terms

Qwen3.5-397B-A17B is licensed under Apache 2.0, while LongCat-Flash-Thinking-2601 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Qwen3.5-397B-A17B

Apache 2.0

Open weights

LongCat-Flash-Thinking-2601

MIT

Open weights

Release Timeline

When each model was launched

Qwen3.5-397B-A17B was released on 2026-02-16, while LongCat-Flash-Thinking-2601 was released on 2026-01-14.

Qwen3.5-397B-A17B is 1 month newer than LongCat-Flash-Thinking-2601.

Qwen3.5-397B-A17B

Feb 16, 2026

2 months ago

1 month newer
LongCat-Flash-Thinking-2601

Jan 14, 2026

3 months ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

Qwen3.5-397B-A17B is available from Novita. LongCat-Flash-Thinking-2601 is available from Meituan.

Qwen3.5-397B-A17B (via Novita)
  Input: $0.60/1M
  Output: $3.60/1M

LongCat-Flash-Thinking-2601 (via Meituan)
  Input: $0.30/1M
  Output: $1.20/1M

* Prices shown are per million tokens


Key Takeaways

Qwen3.5-397B-A17B (Alibaba Cloud / Qwen Team)

Larger context window (262,144 tokens)
Supports multimodal inputs
Higher BrowseComp score (69.0% vs 56.6%)
Higher BrowseComp-zh score (70.3% vs 69.0%)
Higher GPQA score (88.4% vs 80.5%)
Higher Humanity's Last Exam score (28.7% vs 25.2%)
Higher IMO-AnswerBench score (80.9% vs 78.6%)
Higher SWE-Bench Verified score (76.4% vs 70.0%)

Detailed Comparison


FAQ

Common questions about Qwen3.5-397B-A17B vs LongCat-Flash-Thinking-2601

Which model is better?
Qwen3.5-397B-A17B significantly outperforms across most benchmarks. Qwen3.5-397B-A17B is made by Alibaba Cloud / Qwen Team and LongCat-Flash-Thinking-2601 is made by Meituan. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Qwen3.5-397B-A17B scores MMLU-Redux: 94.9%, HMMT 2025: 94.8%, C-Eval: 93.0%, HMMT25: 92.7%, IFEval: 92.6%. LongCat-Flash-Thinking-2601 scores AIME 2025: 99.6%, Tau2 Telecom: 99.3%, Tau2 Retail: 88.6%, LiveCodeBench: 82.8%, GPQA: 80.5%.

Which model is cheaper?
LongCat-Flash-Thinking-2601 is 2.0x cheaper for input tokens. Qwen3.5-397B-A17B costs $0.60/M input and $3.60/M output via Novita; LongCat-Flash-Thinking-2601 costs $0.30/M input and $1.20/M output via Meituan.

Which model has a larger context window?
Qwen3.5-397B-A17B supports 262K tokens and LongCat-Flash-Thinking-2601 supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include context window (262K vs 128K), input pricing ($0.60 vs $0.30/M), multimodal support (yes vs no), and licensing (Apache 2.0 vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
Qwen3.5-397B-A17B is developed by Alibaba Cloud / Qwen Team; LongCat-Flash-Thinking-2601 is developed by Meituan.