Claude 3.5 Sonnet vs Qwen3.5-122B-A10B Comparison

Comparing Claude 3.5 Sonnet and Qwen3.5-122B-A10B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

5 benchmarks

Claude 3.5 Sonnet leads on 1 benchmark (AI2D), while Qwen3.5-122B-A10B leads on the other 4 (GPQA, MMLU-Pro, MMMU, SWE-Bench Verified).

Qwen3.5-122B-A10B outperforms on most benchmarks, often by a wide margin.

Sat Mar 14 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Qwen3.5-122B-A10B costs less

For input processing, Claude 3.5 Sonnet ($3.00/1M tokens) is 7.5x more expensive than Qwen3.5-122B-A10B ($0.40/1M tokens).

For output processing, Claude 3.5 Sonnet ($15.00/1M tokens) is 4.7x more expensive than Qwen3.5-122B-A10B ($3.20/1M tokens).

Overall, at these rates Claude 3.5 Sonnet is roughly 5.5x more expensive than Qwen3.5-122B-A10B.*

* Using a 3:1 ratio of input to output tokens
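The blended cost behind this footnote can be reproduced with a few lines of arithmetic. This is a sketch using the per-1M-token prices listed on this page; the 5.5x multiple is derived from those prices, not stated by the page itself.

```python
# Blended price per 1M tokens, assuming the page's 3:1 input:output token ratio.
def blended_price(input_price: float, output_price: float,
                  input_parts: int = 3, output_parts: int = 1) -> float:
    """Weighted average of input and output prices per 1M tokens."""
    total = input_parts + output_parts
    return (input_parts * input_price + output_parts * output_price) / total

claude = blended_price(3.00, 15.00)   # (3*3.00 + 15.00) / 4 = 6.00
qwen = blended_price(0.40, 3.20)      # (3*0.40 + 3.20) / 4 = 1.10

print(f"Claude 3.5 Sonnet blended: ${claude:.2f}/1M")
print(f"Qwen3.5-122B-A10B blended: ${qwen:.2f}/1M")
print(f"Cost multiple: {claude / qwen:.1f}x")  # -> 5.5x
```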

Lowest available price from all providers
Claude 3.5 Sonnet (Anthropic)
Input tokens: $3.00
Output tokens: $15.00
Best provider: Anthropic

Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)
Input tokens: $0.40
Output tokens: $3.20
Best provider: Novita

Context Window

Maximum input and output token capacity

Qwen3.5-122B-A10B accepts 262,144 input tokens, compared to Claude 3.5 Sonnet's 200,000. For generation, Claude 3.5 Sonnet supports responses of up to 200,000 tokens, while Qwen3.5-122B-A10B is limited to 64,000 tokens.

Claude 3.5 Sonnet (Anthropic)
Input: 200,000 tokens
Output: 200,000 tokens

Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)
Input: 262,144 tokens
Output: 64,000 tokens
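These asymmetric limits can be checked programmatically before sending a request. A minimal sketch, using only the input/output limits listed above (the model keys and `fits` helper are hypothetical names, not any provider's API):

```python
# Token limits as listed in this comparison (assumed from the page).
LIMITS = {
    "claude-3.5-sonnet": {"input": 200_000, "output": 200_000},
    "qwen3.5-122b-a10b": {"input": 262_144, "output": 64_000},
}

def fits(model: str, prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if the request fits within the model's listed limits."""
    lim = LIMITS[model]
    return prompt_tokens <= lim["input"] and max_output_tokens <= lim["output"]

# A 250k-token prompt fits Qwen3.5's larger input window but not Claude's.
print(fits("qwen3.5-122b-a10b", 250_000, 32_000))   # True
print(fits("claude-3.5-sonnet", 250_000, 32_000))   # False
```

The practical upshot: Qwen3.5-122B-A10B is the better fit for very long prompts, while Claude 3.5 Sonnet's listed limits allow far longer single responses.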

Input Capabilities

Supported data types and modalities

Both Claude 3.5 Sonnet and Qwen3.5-122B-A10B support multimodal inputs: each accepts text, images, audio, and video.

Claude 3.5 Sonnet

Text
Images
Audio
Video

Qwen3.5-122B-A10B

Text
Images
Audio
Video

License

Usage and distribution terms

Claude 3.5 Sonnet is licensed under a proprietary license, while Qwen3.5-122B-A10B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Claude 3.5 Sonnet

Proprietary

Closed source

Qwen3.5-122B-A10B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Claude 3.5 Sonnet was released on 2024-10-22, while Qwen3.5-122B-A10B was released on 2026-02-24.

Qwen3.5-122B-A10B is 16 months newer than Claude 3.5 Sonnet.

Claude 3.5 Sonnet

Oct 22, 2024

1.4 years ago

Qwen3.5-122B-A10B

Feb 24, 2026

2 weeks ago

1.3yr newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Claude 3.5 Sonnet is available from Anthropic, AWS Bedrock, and Google. Qwen3.5-122B-A10B is available only from Novita. The choice of provider can affect reliability and service quality.

Claude 3.5 Sonnet

Anthropic: $3.00/1M input, $15.00/1M output
AWS Bedrock: $3.00/1M input, $15.00/1M output
Google: $3.00/1M input, $15.00/1M output

Qwen3.5-122B-A10B

Novita: $0.40/1M input, $3.20/1M output
* Prices shown are per million tokens


Key Takeaways

Claude 3.5 Sonnet (Anthropic)
Higher AI2D score (94.7% vs 93.3%)

Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)
Larger context window (262,144 tokens)
Less expensive input tokens
Less expensive output tokens
Has open weights
Higher GPQA score (86.6% vs 67.2%)
Higher MMLU-Pro score (86.7% vs 77.6%)
Higher MMMU score (83.9% vs 68.3%)
Higher SWE-Bench Verified score (72.0% vs 49.0%)

Detailed Comparison

AI Model Comparison Table
Feature | Claude 3.5 Sonnet (Anthropic) | Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team)