Model Comparison

GPT-5.3 Codex vs QvQ-72B-Preview

Comparing GPT-5.3 Codex and QvQ-72B-Preview across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

No common benchmarks found

GPT-5.3 Codex and QvQ-72B-Preview don't have any common benchmark datasets to compare. They may have been evaluated on different testing suites.

Arena Performance

Human preference votes

No arena vote data is available for these models.

Pricing Analysis

Price comparison per million tokens

Cost data for QvQ-72B-Preview is unavailable; GPT-5.3 Codex pricing is shown below, followed by a worked cost example.

Lowest available price across all providers, as of Apr 14, 2026 (llm-stats.com):

OpenAI
GPT-5.3 Codex
Input tokens: $1.75 per million
Output tokens: $14.00 per million
Best provider: OpenAI

Alibaba Cloud / Qwen Team
QvQ-72B-Preview
Input tokens: not available
Output tokens: not available
Best provider: not available
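To make the per-million-token rates concrete, here is a minimal sketch that estimates the cost of a single request at GPT-5.3 Codex's listed prices. The token counts in the example are made up for illustration.

# Rough cost estimate at the listed GPT-5.3 Codex rates (USD per 1M tokens).
INPUT_PRICE_PER_M = 1.75
OUTPUT_PRICE_PER_M = 14.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 12,000-token prompt that produces a 2,000-token completion.
print(f"${request_cost(12_000, 2_000):.4f}")  # -> $0.0490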

Context Window

Maximum input and output token capacity

Only GPT-5.3 Codex specifies context limits: a 400,000-token input window and a 128,000-token output limit. QvQ-72B-Preview does not list context figures here; a small token-budget sketch follows the table below.

OpenAI
GPT-5.3 Codex
Input: 400,000 tokens
Output: 128,000 tokens

Alibaba Cloud / Qwen Team
QvQ-72B-Preview
Input: not specified
Output: not specified
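The sketch below checks whether a prompt and a reserved completion budget fit within GPT-5.3 Codex's listed limits. The 4-characters-per-token ratio is a rough assumption; an exact count requires the provider's own tokenizer.

# Context-budget check against GPT-5.3 Codex's listed limits.
MAX_INPUT_TOKENS = 400_000
MAX_OUTPUT_TOKENS = 128_000

def estimate_tokens(text: str) -> int:
    """Very rough estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, reserved_output_tokens: int = 8_000) -> bool:
    """True if the prompt fits the input window and the reply budget fits the output limit."""
    return (estimate_tokens(prompt) <= MAX_INPUT_TOKENS
            and reserved_output_tokens <= MAX_OUTPUT_TOKENS)

# A ~210,000-character source file is only ~52,000 estimated tokens, well under the window.
print(fits_context("def main():\n    pass\n" * 10_000))  # -> True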

Input Capabilities

Supported data types and modalities

Both GPT-5.3 Codex and QvQ-72B-Preview support multimodal inputs, so each can process more than one type of data; a brief request sketch follows the modality lists below.

GPT-5.3 Codex

Text
Images
Audio
Video

QvQ-72B-Preview

Text
Images
Audio
Video
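As an illustration of multimodal input, this sketch sends a text-plus-image request. It assumes GPT-5.3 Codex is exposed through OpenAI's standard Chat Completions API under the model name "gpt-5.3-codex"; both the endpoint and the identifier are assumptions, not details confirmed on this page.

# Hedged sketch: text + image input via OpenAI's Chat Completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5.3-codex",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the bug shown in this screenshot."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)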

License

Usage and distribution terms

GPT-5.3 Codex is distributed under a proprietary license, while QvQ-72B-Preview is released under the Qwen license; a local-loading sketch follows the license summary below.

License differences may affect how you can use these models in commercial or open-source projects.

GPT-5.3 Codex: Proprietary (closed source)

QvQ-72B-Preview: Qwen license (open weights)
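Because QvQ-72B-Preview ships open weights, it can be downloaded and run locally. The sketch below loads it with Hugging Face transformers; the repository id "Qwen/QVQ-72B-Preview" and the Qwen2-VL model class follow how the Qwen team typically publishes this family, but treat both as assumptions to verify against the model card. A 72B model also needs substantial GPU memory or quantization.

# Hedged sketch: running the open weights locally with Hugging Face transformers.
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/QVQ-72B-Preview"  # assumed Hugging Face repository id
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [{"type": "text", "text": "What is 12 * 13?"}]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])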

Release Timeline

When each model was launched

GPT-5.3 Codex was released on February 5, 2026, and QvQ-72B-Preview on December 25, 2024, making GPT-5.3 Codex roughly 13 months (about 1.1 years) newer; the date arithmetic is shown below the cards.

GPT-5.3 Codex: Feb 5, 2026 (about 2 months before this page's data date of Apr 14, 2026)

QvQ-72B-Preview: Dec 25, 2024 (about 1.3 years before)
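For reference, the gap between the two release dates works out as follows; this is plain calendar arithmetic, not data from the page.

# Calendar gap between the two release dates.
from datetime import date

gap = date(2026, 2, 5) - date(2024, 12, 25)
print(gap.days)                     # 407 days
print(round(gap.days / 30.44, 1))   # ~13.4 months
print(round(gap.days / 365.25, 1))  # ~1.1 years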

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Outputs Comparison

No output data is available for either model.


Key Takeaways

GPT-5.3 Codex (OpenAI): larger context window (400,000 tokens)

QvQ-72B-Preview (Alibaba Cloud / Qwen Team): has open weights

Detailed Comparison

AI Model Comparison Table: a feature-by-feature comparison of GPT-5.3 Codex (OpenAI) and QvQ-72B-Preview (Alibaba Cloud / Qwen Team).

FAQ

Common questions about GPT-5.3 Codex vs QvQ-72B-Preview

Which model should you choose?
GPT-5.3 Codex (OpenAI) and QvQ-72B-Preview (Alibaba Cloud / Qwen Team) each have strengths in different areas. Compare their benchmark scores, pricing, context windows, and capabilities above to determine which fits your needs.

How do their benchmark scores compare?
GPT-5.3 Codex scores 81.4% on SWE-Lancer (IC-Diamond subset), 77.6% on Cybersecurity CTFs, 77.3% on Terminal-Bench 2.0, 64.7% on OSWorld-Verified, and 56.8% on SWE-Bench Pro. QvQ-72B-Preview scores 71.4% on MathVista, 70.3% on MMMU, 35.9% on MathVision, and 20.4% on OlympiadBench.

What context windows do they support?
GPT-5.3 Codex supports 400K tokens, while QvQ-72B-Preview does not publish a context window figure. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include licensing (proprietary for GPT-5.3 Codex versus the Qwen license for QvQ-72B-Preview). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
GPT-5.3 Codex is developed by OpenAI, and QvQ-72B-Preview is developed by Alibaba Cloud / Qwen Team.