DeepSeek-V3.2 (Non-thinking) vs Qwen2 72B Instruct Comparison
Comparing DeepSeek-V3.2 (Non-thinking) and Qwen2 72B Instruct across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
DeepSeek-V3.2 (Non-thinking) and Qwen2 72B Instruct share no common benchmark datasets, so their scores cannot be compared directly; they appear to have been evaluated on different test suites.
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
Cost data is unavailable for both models.
Model Size
Parameter count comparison
DeepSeek-V3.2 (Non-thinking) has 613.0B more parameters than Qwen2 72B Instruct, making it 851.4% larger.
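The headline figures follow directly from the two parameter counts. A minimal sketch, assuming a 685B total for DeepSeek-V3.2 (Non-thinking) as implied by the text above (Qwen2's 72B plus the stated 613B difference):

```python
# Reproduce the headline figures from the two parameter counts.
# 685B for DeepSeek-V3.2 (Non-thinking) is an assumption implied by the
# text above (72B + 613B); 72B comes from the Qwen2 72B Instruct name.
deepseek_params_b = 685.0
qwen2_params_b = 72.0

difference_b = deepseek_params_b - qwen2_params_b      # 613.0B
percent_larger = 100 * difference_b / qwen2_params_b   # ~851.4%

print(f"{difference_b:.1f}B more parameters, {percent_larger:.1f}% larger")
```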
Context Window
Maximum input and output token capacity
DeepSeek-V3.2 (Non-thinking) specifies a 131,072-token input context and an 8,192-token output limit; Qwen2 72B Instruct specifies neither.
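In practice, these limits cap how much you can send to and request from the model per call. Below is a minimal sketch of staying within the stated limits through an OpenAI-compatible client; the base_url and model id are assumptions for illustration only, so check the provider's documentation for the actual values.

```python
# Minimal sketch of keeping a request within the stated limits when calling
# DeepSeek-V3.2 (Non-thinking) through an OpenAI-compatible client.
# The base_url and model id below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

# The prompt must fit inside the 131,072-token input window, and the
# requested completion should not exceed the 8,192-token output cap.
response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model id for the non-thinking variant
    messages=[{"role": "user", "content": "Summarize the comparison above."}],
    max_tokens=8_192,
)
print(response.choices[0].message.content)
```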
License
Usage and distribution terms
DeepSeek-V3.2 (Non-thinking) is released under the MIT license, while Qwen2 72B Instruct uses the tongyi-qianwen license; both are open-weights releases.
License differences may affect how you can use these models in commercial or open-source projects.
Release Timeline
When each model was launched
DeepSeek-V3.2 (Non-thinking) was released on 2025-12-01, while Qwen2 72B Instruct was released on 2024-07-23.
DeepSeek-V3.2 (Non-thinking) is about 16 months (roughly 1.4 years) newer than Qwen2 72B Instruct.
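A quick check of the gap between the two release dates, using only the dates stated above:

```python
from datetime import date

# Release dates as reported above.
deepseek_release = date(2025, 12, 1)
qwen2_release = date(2024, 7, 23)

gap = deepseek_release - qwen2_release
print(f"{gap.days} days apart")                 # 496 days
print(f"~{gap.days / 30.44:.1f} months apart")  # ~16.3 months (~1.4 years)
```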
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Outputs Comparison
Key Takeaways
Qwen2 72B Instruct
Alibaba Cloud / Qwen Team
Detailed Comparison
| Feature | DeepSeek-V3.2 (Non-thinking) | Qwen2 72B Instruct |
|---|---|---|
| Parameters | ~685B (72B + 613B) | 72B |
| Input context | 131,072 tokens | Not specified |
| Output limit | 8,192 tokens | Not specified |
| License | MIT (open weights) | tongyi-qianwen (open weights) |
| Release date | Dec 1, 2025 | Jul 23, 2024 |
| Knowledge cutoff | Not specified | Not specified |
| Developer | DeepSeek | Alibaba Cloud / Qwen Team |