Model Comparison

Qwen3-Coder 480B A35B Instruct vs Qwen3-Next-80B-A3B-Thinking

Qwen3-Coder 480B A35B Instruct shows notably better performance in the majority of benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

Qwen3-Coder 480B A35B Instruct outperforms in 2 benchmarks (TAU-bench Airline, TAU-bench Retail), while Qwen3-Next-80B-A3B-Thinking is better at 1 benchmark (BFCL-v3).

Mon May 11 2026 • llm-stats.com


Model Size

Parameter count comparison

400.0B diff

Qwen3-Coder 480B A35B Instruct has 400.0B more parameters than Qwen3-Next-80B-A3B-Thinking, making it 500.0% larger.
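The size gap stated above can be verified with a line of arithmetic; the figures below are the parameter counts taken directly from this comparison:

```python
# Parameter counts from the comparison above, in billions.
coder_params = 480.0  # Qwen3-Coder 480B A35B Instruct
next_params = 80.0    # Qwen3-Next-80B-A3B-Thinking

diff = coder_params - next_params                              # absolute gap in billions
pct_larger = (coder_params - next_params) / next_params * 100  # relative size increase

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # 400.0B diff, 500.0% larger
```

Note that "500% larger" means the model is 6x the size, not 5x.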

Qwen3-Coder 480B A35B Instruct (Alibaba Cloud / Qwen Team): 480.0B parameters
Qwen3-Next-80B-A3B-Thinking (Alibaba Cloud / Qwen Team): 80.0B parameters

Context Window

Maximum input and output token capacity

Only Qwen3-Next-80B-A3B-Thinking specifies context limits: 65,536 input tokens and 65,536 output tokens. Qwen3-Coder 480B A35B Instruct does not publish either figure here.

Qwen3-Coder 480B A35B Instruct (Alibaba Cloud / Qwen Team)
Input: not specified
Output: not specified

Qwen3-Next-80B-A3B-Thinking (Alibaba Cloud / Qwen Team)
Input: 65,536 tokens
Output: 65,536 tokens
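As a practical illustration, a request can be budgeted against Qwen3-Next-80B-A3B-Thinking's published limits. This is only a sketch: the 4-characters-per-token ratio is a rough heuristic assumption, not the model's actual tokenizer.

```python
# Published limits for Qwen3-Next-80B-A3B-Thinking (from the table above).
INPUT_LIMIT = 65_536
OUTPUT_LIMIT = 65_536

def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    """Rough check that a request stays within the model's context limits.

    Uses ~4 characters per token as a crude estimate; a real integration
    should count tokens with the model's own tokenizer instead.
    """
    est_input_tokens = len(prompt) // 4
    return est_input_tokens <= INPUT_LIMIT and max_output_tokens <= OUTPUT_LIMIT

print(fits_in_context("Summarize the release notes below.", 4_096))  # True
print(fits_in_context("x" * 1_000_000, 4_096))                       # False
```

In practice the input and output budgets are usually shared or interdependent, so check the provider's API documentation for how the two limits combine.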

License

Usage and distribution terms

Both models are licensed under Apache 2.0.

Both models share the same licensing terms, providing consistent usage rights.

Qwen3-Coder 480B A35B Instruct

Apache 2.0

Open weights

Qwen3-Next-80B-A3B-Thinking

Apache 2.0

Open weights

Release Timeline

When each model was launched

Qwen3-Coder 480B A35B Instruct was released on 2025-01-31, while Qwen3-Next-80B-A3B-Thinking was released on 2025-09-10.

Qwen3-Next-80B-A3B-Thinking is 7 months newer than Qwen3-Coder 480B A35B Instruct.

Qwen3-Coder 480B A35B Instruct

Jan 31, 2025

1.3 years ago

Qwen3-Next-80B-A3B-Thinking

Sep 10, 2025

8 months ago

7mo newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Key Takeaways

Qwen3-Coder 480B A35B Instruct: higher TAU-bench Airline score (60.0% vs 49.0%) and higher TAU-bench Retail score (77.5% vs 69.6%)
Qwen3-Next-80B-A3B-Thinking: higher BFCL-v3 score (72.0% vs 68.7%) and larger context window (65,536 tokens)
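The head-to-head tally reported earlier (2 wins vs 1) can be reproduced from the scores listed above; this minimal sketch simply copies the figures from this page:

```python
# Head-to-head benchmark scores: (Qwen3-Coder 480B, Qwen3-Next-80B).
scores = {
    "TAU-bench Airline": (60.0, 49.0),
    "TAU-bench Retail": (77.5, 69.6),
    "BFCL-v3": (68.7, 72.0),
}

coder_wins = sum(coder > other for coder, other in scores.values())
next_wins = sum(other > coder for coder, other in scores.values())

print(f"Qwen3-Coder wins {coder_wins}, Qwen3-Next wins {next_wins}")
# Qwen3-Coder wins 2, Qwen3-Next wins 1
```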

FAQ

Common questions about Qwen3-Coder 480B A35B Instruct vs Qwen3-Next-80B-A3B-Thinking.

Which is better, Qwen3-Coder 480B A35B Instruct or Qwen3-Next-80B-A3B-Thinking?

Qwen3-Coder 480B A35B Instruct shows notably better performance in the majority of benchmarks. Both models are made by Alibaba Cloud / Qwen Team. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How does Qwen3-Coder 480B A35B Instruct compare to Qwen3-Next-80B-A3B-Thinking in benchmarks?

Qwen3-Coder 480B A35B Instruct scores TAU-bench Retail: 77.5%, SWE-Bench Verified: 69.6%, BFCL-v3: 68.7%, Aider-Polyglot: 61.8%, TAU-bench Airline: 60.0%. Qwen3-Next-80B-A3B-Thinking scores MMLU-Redux: 92.5%, IFEval: 88.9%, AIME 2025: 87.8%, WritingBench: 84.6%, MMLU-Pro: 82.7%. Note that these are each model's strongest reported benchmarks, so the two lists do not line up head-to-head.

What are the context window sizes for Qwen3-Coder 480B A35B Instruct and Qwen3-Next-80B-A3B-Thinking?

Qwen3-Coder 480B A35B Instruct does not publish a context window size here, while Qwen3-Next-80B-A3B-Thinking supports 66K (65,536) tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.