Model Comparison

Qwen2 72B Instruct vs Qwen3-Next-80B-A3B-Thinking

Qwen3-Next-80B-A3B-Thinking significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

2 shared benchmarks

Qwen2 72B Instruct outperforms in 0 of the 2 shared benchmarks, while Qwen3-Next-80B-A3B-Thinking is better in both (GPQA and MMLU-Pro).

Sat May 09 2026 • llm-stats.com

Arena Performance

Human preference votes

Model Size

Parameter count comparison

8.0B diff

Qwen3-Next-80B-A3B-Thinking has 8.0B more parameters than Qwen2 72B Instruct, making it 11.1% larger.

Qwen2 72B Instruct (Alibaba Cloud / Qwen Team): 72.0B parameters
Qwen3-Next-80B-A3B-Thinking (Alibaba Cloud / Qwen Team): 80.0B parameters
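The size gap above can be verified with a quick calculation (parameter counts hard-coded from the figures on this page; the percentage is relative to the smaller model):

```python
# Parameter counts from this comparison, in billions.
qwen2_params = 72.0
qwen3_next_params = 80.0

diff = qwen3_next_params - qwen2_params    # absolute gap in billions
pct_larger = diff / qwen2_params * 100     # relative to the smaller model

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger")  # 8.0B diff, 11.1% larger
```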

Context Window

Maximum input and output token capacity

Only Qwen3-Next-80B-A3B-Thinking specifies its context limits: 65,536 input tokens and 65,536 output tokens. Qwen2 72B Instruct's limits are not listed.

Qwen2 72B Instruct (Alibaba Cloud / Qwen Team): input and output token limits not specified
Qwen3-Next-80B-A3B-Thinking (Alibaba Cloud / Qwen Team): 65,536 input tokens, 65,536 output tokens
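As a rough sketch of what a 65,536-token input window means in practice, the check below uses a ~4 characters-per-token heuristic for English prose. The heuristic and the helper names are assumptions for illustration; an accurate count requires the model's actual tokenizer.

```python
CONTEXT_LIMIT = 65_536   # Qwen3-Next-80B-A3B-Thinking input limit (tokens)
CHARS_PER_TOKEN = 4      # rough heuristic, NOT the real tokenizer

def estimated_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, limit: int = CONTEXT_LIMIT) -> bool:
    """True if the estimated token count is within the model's input window."""
    return estimated_tokens(text) <= limit

doc = "word " * 10_000           # ~50,000 characters, so roughly 12,500 tokens
print(fits_in_context(doc))      # True: well under 65,536
```

By the same heuristic, a document would need to exceed roughly 260,000 characters before it risks overflowing the window.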

License

Usage and distribution terms

Qwen2 72B Instruct is licensed under tongyi-qianwen, while Qwen3-Next-80B-A3B-Thinking uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Qwen2 72B Instruct: tongyi-qianwen (open weights)

Qwen3-Next-80B-A3B-Thinking: Apache 2.0 (open weights)

Release Timeline

When each model was launched

Qwen2 72B Instruct was released on 2024-07-23, while Qwen3-Next-80B-A3B-Thinking was released on 2025-09-10.

Qwen3-Next-80B-A3B-Thinking is 14 months newer than Qwen2 72B Instruct.

Qwen2 72B Instruct: Jul 23, 2024 (1.8 years ago)

Qwen3-Next-80B-A3B-Thinking: Sep 10, 2025 (8 months ago, 1.1 years newer)

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

Qwen2 72B Instruct (Alibaba Cloud / Qwen Team): no standout differentiators in the available data for this pair.

Qwen3-Next-80B-A3B-Thinking (Alibaba Cloud / Qwen Team):

Larger context window (65,536 tokens)
Higher GPQA score (77.2% vs 42.4%)
Higher MMLU-Pro score (82.7% vs 64.4%)

Detailed Comparison

AI Model Comparison Table

Feature | Qwen2 72B Instruct | Qwen3-Next-80B-A3B-Thinking
Parameters | 72.0B | 80.0B
Input context | not specified | 65,536 tokens
Output context | not specified | 65,536 tokens
License | tongyi-qianwen | Apache 2.0
Release date | Jul 23, 2024 | Sep 10, 2025
GPQA | 42.4% | 77.2%
MMLU-Pro | 64.4% | 82.7%

FAQ

Common questions about Qwen2 72B Instruct vs Qwen3-Next-80B-A3B-Thinking.

Which is better, Qwen2 72B Instruct or Qwen3-Next-80B-A3B-Thinking?

Qwen3-Next-80B-A3B-Thinking significantly outperforms across most benchmarks. Both models are made by Alibaba Cloud / Qwen Team. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How does Qwen2 72B Instruct compare to Qwen3-Next-80B-A3B-Thinking in benchmarks?

Qwen2 72B Instruct scores GSM8k: 91.1%, CMMLU: 90.1%, HellaSwag: 87.6%, HumanEval: 86.0%, Winogrande: 85.1%. Qwen3-Next-80B-A3B-Thinking scores MMLU-Redux: 92.5%, IFEval: 88.9%, AIME 2025: 87.8%, WritingBench: 84.6%, MMLU-Pro: 82.7%.

What are the context window sizes for Qwen2 72B Instruct and Qwen3-Next-80B-A3B-Thinking?

Qwen2 72B Instruct's context window is not specified, while Qwen3-Next-80B-A3B-Thinking supports 66K (65,536) tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between Qwen2 72B Instruct and Qwen3-Next-80B-A3B-Thinking?

Key differences include licensing (tongyi-qianwen vs Apache 2.0), parameter count (72.0B vs 80.0B), and specified context window (unspecified vs 65,536 tokens). See the full comparison above for benchmark-by-benchmark results.