Model Comparison

DeepSeek-V3.2-Exp vs Qwen3 VL 235B A22B Thinking

DeepSeek-V3.2-Exp shows notably better performance in the majority of benchmarks and is roughly 4.0x cheaper per token (blended at a 3:1 input:output ratio).

Performance Benchmarks

Comparative analysis across standard metrics

Across the 4 shared benchmarks, DeepSeek-V3.2-Exp outperforms on 3 (Humanity's Last Exam, MMLU-Pro, SimpleQA), while Qwen3 VL 235B A22B Thinking is better on 1 (AIME 2025).

Thu Apr 30 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

DeepSeek-V3.2-Exp costs less

For input processing, DeepSeek-V3.2-Exp ($0.27/1M tokens) is 1.7x cheaper than Qwen3 VL 235B A22B Thinking ($0.45/1M tokens).

For output processing, DeepSeek-V3.2-Exp ($0.41/1M tokens) is 8.5x cheaper than Qwen3 VL 235B A22B Thinking ($3.49/1M tokens).

In conclusion, Qwen3 VL 235B A22B Thinking is more expensive than DeepSeek-V3.2-Exp.*

* Using a 3:1 ratio of input to output tokens
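The blended "4.0x cheaper" figure can be reproduced with a few lines of arithmetic. The prices and the 3:1 input:output ratio come from this page; the helper function name is just for illustration:

```python
# Blended price per 1M tokens, assuming a 3:1 input:output token ratio.
# Prices ($/1M tokens) are the lowest-provider figures quoted above.
def blended_price(input_price, output_price, input_ratio=3, output_ratio=1):
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

deepseek = blended_price(0.27, 0.41)   # ≈ $0.305/1M
qwen = blended_price(0.45, 3.49)       # ≈ $1.21/1M
print(f"Ratio: {qwen / deepseek:.1f}x")  # ≈ 4.0x
```

Changing the assumed ratio changes the headline multiple: output-heavy workloads favor DeepSeek-V3.2-Exp even more, since its output price advantage (8.5x) is larger than its input advantage (1.7x).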

Lowest available price from all providers
DeepSeek
DeepSeek-V3.2-Exp
Input tokens: $0.27
Output tokens: $0.41
Best provider: Novita
Alibaba Cloud / Qwen Team
Qwen3 VL 235B A22B Thinking
Input tokens: $0.45
Output tokens: $3.49
Best provider: DeepInfra

Model Size

Parameter count comparison


DeepSeek-V3.2-Exp has 449.0B more parameters than Qwen3 VL 235B A22B Thinking, making it 190.3% larger.

DeepSeek
DeepSeek-V3.2-Exp
685.0B parameters
Alibaba Cloud / Qwen Team
Qwen3 VL 235B A22B Thinking
236.0B parameters
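The size gap works out as simple arithmetic on the published parameter counts:

```python
# Parameter counts (billions) from the cards above.
deepseek_b = 685.0
qwen_b = 236.0

diff_b = deepseek_b - qwen_b           # absolute difference in billions
pct_larger = diff_b / qwen_b * 100     # how much larger DeepSeek-V3.2-Exp is
print(f"{diff_b:.1f}B difference, {pct_larger:.1f}% larger")
```

Note that both are mixture-of-experts-style totals as listed here; active-parameter counts per token (the "A22B" in Qwen's name suggests 22B active) are a separate figure not compared on this page.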

Context Window

Maximum input and output token capacity

Qwen3 VL 235B A22B Thinking accepts 262,144 input tokens compared to DeepSeek-V3.2-Exp's 163,840 tokens. Qwen3 VL 235B A22B Thinking can generate longer responses up to 262,144 tokens, while DeepSeek-V3.2-Exp is limited to 65,536 tokens.

DeepSeek
DeepSeek-V3.2-Exp
Input: 163,840 tokens
Output: 65,536 tokens
Alibaba Cloud / Qwen Team
Qwen3 VL 235B A22B Thinking
Input: 262,144 tokens
Output: 262,144 tokens
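As a rough sketch of what these limits mean in practice, the check below estimates whether a document fits each model's window. The ~4-characters-per-token heuristic is an assumption for illustration; a real tokenizer should be used for accurate counts:

```python
# Feasibility check: does a prompt plus requested output fit a model's
# context window? Limits are the token capacities listed above.
LIMITS = {
    "DeepSeek-V3.2-Exp": {"input": 163_840, "output": 65_536},
    "Qwen3 VL 235B A22B Thinking": {"input": 262_144, "output": 262_144},
}

def fits(model, prompt_text, max_output_tokens):
    est_input = len(prompt_text) // 4  # crude ~4 chars/token heuristic
    lim = LIMITS[model]
    return est_input <= lim["input"] and max_output_tokens <= lim["output"]

# A ~1M-character document (~250K estimated tokens) exceeds DeepSeek's
# input limit but fits within Qwen's larger window.
doc = "x" * 1_000_000
print(fits("DeepSeek-V3.2-Exp", doc, 4_096))            # False
print(fits("Qwen3 VL 235B A22B Thinking", doc, 4_096))  # True
```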

Input Capabilities

Supported data types and modalities

Qwen3 VL 235B A22B Thinking supports multimodal inputs, whereas DeepSeek-V3.2-Exp does not.

Qwen3 VL 235B A22B Thinking can handle both text and other forms of data like images, making it suitable for multimodal applications.

DeepSeek-V3.2-Exp

Text: supported
Images: not supported
Audio: not supported
Video: not supported

Qwen3 VL 235B A22B Thinking

Text: supported
Images: supported
Audio: not supported
Video: supported

License

Usage and distribution terms

DeepSeek-V3.2-Exp is licensed under MIT, while Qwen3 VL 235B A22B Thinking uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

DeepSeek-V3.2-Exp

MIT

Open weights

Qwen3 VL 235B A22B Thinking

Apache 2.0

Open weights

Release Timeline

When each model was launched

DeepSeek-V3.2-Exp was released on 2025-09-29, while Qwen3 VL 235B A22B Thinking was released on 2025-09-22.

DeepSeek-V3.2-Exp is about one week newer than Qwen3 VL 235B A22B Thinking.

DeepSeek-V3.2-Exp

Sep 29, 2025

7 months ago

Qwen3 VL 235B A22B Thinking

Sep 22, 2025

7 months ago

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

DeepSeek-V3.2-Exp is available from Novita. Qwen3 VL 235B A22B Thinking is available from DeepInfra, Novita.

DeepSeek-V3.2-Exp

Novita
Input: $0.27/1M • Output: $0.41/1M

Qwen3 VL 235B A22B Thinking

DeepInfra
Input: $0.45/1M • Output: $3.49/1M

Novita
Input: $0.98/1M • Output: $3.95/1M
* Prices shown are per million tokens


Key Takeaways

DeepSeek-V3.2-Exp:
Less expensive input tokens
Less expensive output tokens
Higher Humanity's Last Exam score (19.8% vs 13.6%)
Higher MMLU-Pro score (85.0% vs 83.8%)
Higher SimpleQA score (97.1% vs 44.4%)

Qwen3 VL 235B A22B Thinking:
Larger context window (262,144 tokens)
Supports multimodal inputs
Higher AIME 2025 score (89.7% vs 89.3%)

Detailed Comparison

[Feature-by-feature comparison table: DeepSeek-V3.2-Exp (DeepSeek) vs Qwen3 VL 235B A22B Thinking (Alibaba Cloud / Qwen Team)]

FAQ

Common questions about DeepSeek-V3.2-Exp vs Qwen3 VL 235B A22B Thinking

Which model is better overall?
DeepSeek-V3.2-Exp shows notably better performance in the majority of benchmarks. DeepSeek-V3.2-Exp is made by DeepSeek and Qwen3 VL 235B A22B Thinking is made by Alibaba Cloud / Qwen Team. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
DeepSeek-V3.2-Exp scores SimpleQA: 97.1%, AIME 2025: 89.3%, MMLU-Pro: 85.0%, HMMT 2025: 83.6%, GPQA: 79.9%. Qwen3 VL 235B A22B Thinking scores ZebraLogic: 97.3%, DocVQA (test): 96.5%, ScreenSpot: 95.4%, CountBench: 93.7%, MMLU-Redux: 93.7%.

Which model is cheaper?
DeepSeek-V3.2-Exp is 1.7x cheaper for input tokens. DeepSeek-V3.2-Exp costs $0.27/M input and $0.41/M output via Novita. Qwen3 VL 235B A22B Thinking costs $0.45/M input and $3.49/M output via DeepInfra.

Which model has the larger context window?
DeepSeek-V3.2-Exp supports 164K tokens and Qwen3 VL 235B A22B Thinking supports 262K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include context window (164K vs 262K), input pricing ($0.27 vs $0.45/M), multimodal support (no vs yes), and licensing (MIT vs Apache 2.0). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
DeepSeek-V3.2-Exp is developed by DeepSeek and Qwen3 VL 235B A22B Thinking is developed by Alibaba Cloud / Qwen Team.