Model Comparison

Qwen3-235B-A22B-Thinking-2507 vs Qwen3-Next-80B-A3B-Instruct

Qwen3-235B-A22B-Thinking-2507 significantly outperforms across most benchmarks. Qwen3-Next-80B-A3B-Instruct is 2.0x cheaper per token.

Performance Benchmarks

Comparative analysis across standard metrics

22 benchmarks

Qwen3-235B-A22B-Thinking-2507 outperforms in 21 benchmarks (AIME 2025, BFCL-v3, Creative Writing v3, GPQA, HMMT25, IFEval, Include, LiveBench 20241125, LiveCodeBench v6, MMLU-Pro, MMLU-ProX, MMLU-Redux, Multi-IF, PolyMATH, SuperGPQA, Tau2 Airline, Tau2 Retail, Tau2 Telecom, TAU-bench Airline, TAU-bench Retail, WritingBench), while Qwen3-Next-80B-A3B-Instruct is better at 1 benchmark (Arena-Hard v2).
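The win tally can be reproduced with a short script. The scores below are a subset of the benchmark values listed on this page; extending the dictionary to all 22 benchmarks reproduces the 21-to-1 split.

```python
# Per-benchmark scores (%) as (Thinking-2507, Next-80B-Instruct) pairs;
# a subset of the benchmarks reported on this page.
scores = {
    "AIME 2025":        (92.3, 69.5),
    "GPQA":             (81.1, 72.9),
    "LiveCodeBench v6": (74.1, 56.6),
    "Arena-Hard v2":    (79.7, 82.7),  # the 80B model's single win
}

# Count how many benchmarks each model wins outright.
wins_235b = sum(a > b for a, b in scores.values())
wins_80b = sum(b > a for a, b in scores.values())
print(f"Thinking-2507 wins {wins_235b}, Next-80B-Instruct wins {wins_80b}")
```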


Sun Apr 26 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Qwen3-Next-80B-A3B-Instruct costs less

For input processing, Qwen3-235B-A22B-Thinking-2507 ($0.30/1M tokens) is 2.0x more expensive than Qwen3-Next-80B-A3B-Instruct ($0.15/1M tokens).

For output processing, Qwen3-235B-A22B-Thinking-2507 ($3.00/1M tokens) is 2.0x more expensive than Qwen3-Next-80B-A3B-Instruct ($1.50/1M tokens).

Overall, Qwen3-235B-A22B-Thinking-2507 is 2.0x more expensive than Qwen3-Next-80B-A3B-Instruct.*

* Using a 3:1 ratio of input to output tokens
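The blended figure behind the footnote can be checked directly. `blended_price` is an illustrative helper (not an llm-stats.com API) that weights input and output prices by the assumed 3:1 token mix:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Blended $/1M tokens assuming a 3:1 input:output token mix."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

thinking = blended_price(0.30, 3.00)  # ~$0.975 per 1M tokens
instruct = blended_price(0.15, 1.50)  # ~$0.4875 per 1M tokens
print(f"Thinking-2507 is {thinking / instruct:.1f}x more expensive")
```

Because both input and output prices differ by exactly 2.0x, the blended ratio is 2.0x regardless of the input:output mix chosen.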

Lowest available price from all providers
Alibaba Cloud / Qwen Team
Qwen3-235B-A22B-Thinking-2507
Input tokens: $0.30
Output tokens: $3.00
Best provider: Fireworks

Alibaba Cloud / Qwen Team
Qwen3-Next-80B-A3B-Instruct
Input tokens: $0.15
Output tokens: $1.50
Best provider: Novita

Model Size

Parameter count comparison

155.0B difference

Qwen3-235B-A22B-Thinking-2507 has 155.0B more parameters than Qwen3-Next-80B-A3B-Instruct, making it 193.8% larger.
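A quick check of the arithmetic, using the parameter counts from the cards below:

```python
params_235b, params_80b = 235.0, 80.0   # total parameters, in billions
diff = params_235b - params_80b         # 155.0B difference
pct_larger = (diff / params_80b) * 100  # 193.75%, reported as 193.8%
print(f"{diff:.1f}B more parameters, {pct_larger:.1f}% larger")
```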

Alibaba Cloud / Qwen Team
Qwen3-235B-A22B-Thinking-2507: 235.0B parameters

Alibaba Cloud / Qwen Team
Qwen3-Next-80B-A3B-Instruct: 80.0B parameters

Context Window

Maximum input and output token capacity

Qwen3-235B-A22B-Thinking-2507 accepts 262,144 input tokens compared to Qwen3-Next-80B-A3B-Instruct's 65,536 tokens. Qwen3-235B-A22B-Thinking-2507 can generate longer responses up to 131,072 tokens, while Qwen3-Next-80B-A3B-Instruct is limited to 65,536 tokens.
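One practical consequence: whether a given document fits in a single request. The sketch below is a rough fit check using the common ~4 characters/token heuristic; the heuristic, the `reserve_for_output` budget, and the `fits` helper are all assumptions for illustration (real tokenizers vary by text and language).

```python
# Maximum input context per model, from the cards on this page.
LIMITS = {
    "Qwen3-235B-A22B-Thinking-2507": 262_144,
    "Qwen3-Next-80B-A3B-Instruct": 65_536,
}

def fits(text: str, model: str, reserve_for_output: int = 4_096) -> bool:
    """Rough check: ~4 chars/token heuristic, reserving room for the reply."""
    est_tokens = len(text) // 4
    return est_tokens + reserve_for_output <= LIMITS[model]

doc = "x" * 1_000_000  # ~250K estimated tokens
print(fits(doc, "Qwen3-235B-A22B-Thinking-2507"))  # fits the 262K window
print(fits(doc, "Qwen3-Next-80B-A3B-Instruct"))    # exceeds the 66K window
```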

Alibaba Cloud / Qwen Team
Qwen3-235B-A22B-Thinking-2507
Input: 262,144 tokens
Output: 131,072 tokens

Alibaba Cloud / Qwen Team
Qwen3-Next-80B-A3B-Instruct
Input: 65,536 tokens
Output: 65,536 tokens

License

Usage and distribution terms

Both models are licensed under Apache 2.0.

Both models share the same licensing terms, providing consistent usage rights.

Qwen3-235B-A22B-Thinking-2507

Apache 2.0

Open weights

Qwen3-Next-80B-A3B-Instruct

Apache 2.0

Open weights

Release Timeline

When each model was launched

Qwen3-235B-A22B-Thinking-2507 was released on 2025-07-25, while Qwen3-Next-80B-A3B-Instruct was released on 2025-09-10.

Qwen3-Next-80B-A3B-Instruct is about 1.5 months newer than Qwen3-235B-A22B-Thinking-2507.

Qwen3-235B-A22B-Thinking-2507

Jul 25, 2025

9 months ago

Qwen3-Next-80B-A3B-Instruct

Sep 10, 2025

7 months ago

~1.5mo newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

Qwen3-235B-A22B-Thinking-2507 is available from Fireworks and Novita; Qwen3-Next-80B-A3B-Instruct is available from Novita only.

Qwen3-235B-A22B-Thinking-2507

Fireworks
Input: $0.30/1M • Output: $3.00/1M

Novita
Input: $0.30/1M • Output: $3.00/1M

Qwen3-Next-80B-A3B-Instruct

Novita
Input: $0.15/1M • Output: $1.50/1M
* Prices shown are per million tokens


Key Takeaways

Qwen3-235B-A22B-Thinking-2507 advantages:
Larger context window (262,144 tokens)
Higher AIME 2025 score (92.3% vs 69.5%)
Higher BFCL-v3 score (71.9% vs 70.3%)
Higher Creative Writing v3 score (86.1% vs 85.3%)
Higher GPQA score (81.1% vs 72.9%)
Higher HMMT25 score (83.9% vs 54.1%)
Higher IFEval score (87.8% vs 87.6%)
Higher Include score (81.0% vs 78.9%)
Higher LiveBench 20241125 score (78.4% vs 75.8%)
Higher LiveCodeBench v6 score (74.1% vs 56.6%)
Higher MMLU-Pro score (84.4% vs 80.6%)
Higher MMLU-ProX score (81.0% vs 76.7%)
Higher MMLU-Redux score (93.8% vs 90.9%)
Higher Multi-IF score (80.6% vs 75.8%)
Higher PolyMATH score (60.1% vs 45.9%)
Higher SuperGPQA score (64.9% vs 58.8%)
Higher Tau2 Airline score (58.0% vs 45.5%)
Higher Tau2 Retail score (71.9% vs 57.3%)
Higher Tau2 Telecom score (45.6% vs 13.2%)
Higher TAU-bench Airline score (46.0% vs 44.0%)
Higher TAU-bench Retail score (67.8% vs 60.9%)
Higher WritingBench score (88.3% vs 87.3%)

Qwen3-Next-80B-A3B-Instruct advantages:
Less expensive input tokens
Less expensive output tokens
Higher Arena-Hard v2 score (82.7% vs 79.7%)

Detailed Comparison

FAQ

Common questions about Qwen3-235B-A22B-Thinking-2507 vs Qwen3-Next-80B-A3B-Instruct

Qwen3-235B-A22B-Thinking-2507 significantly outperforms across most benchmarks. Both models are made by Alibaba Cloud / Qwen Team. The best choice depends on your use case — compare their benchmark scores, pricing, and capabilities above.
Qwen3-235B-A22B-Thinking-2507 scores MMLU-Redux: 93.8%, AIME 2025: 92.3%, WritingBench: 88.3%, IFEval: 87.8%, Creative Writing v3: 86.1%. Qwen3-Next-80B-A3B-Instruct scores MMLU-Redux: 90.9%, MultiPL-E: 87.8%, IFEval: 87.6%, WritingBench: 87.3%, Creative Writing v3: 85.3%.
Qwen3-Next-80B-A3B-Instruct is 2.0x cheaper for input tokens. Qwen3-235B-A22B-Thinking-2507 costs $0.30/M input and $3.00/M output via Fireworks. Qwen3-Next-80B-A3B-Instruct costs $0.15/M input and $1.50/M output via Novita.
Qwen3-235B-A22B-Thinking-2507 supports 262K tokens and Qwen3-Next-80B-A3B-Instruct supports 66K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.
Key differences include context window (262K vs 66K), input pricing ($0.30 vs $0.15/M). See the full comparison above for benchmark-by-benchmark results.