Model Comparison

Claude Opus 4 vs GPT-5 Codex

GPT-5 Codex scores higher on the one benchmark reported for both models (SWE-Bench Verified).

Performance Benchmarks

Comparative analysis across standard metrics

1 shared benchmark

Claude Opus 4 leads on 0 of the benchmarks reported for both models, while GPT-5 Codex leads on 1 (SWE-Bench Verified). SWE-Bench Verified is the only benchmark with scores listed for both models, so a broader head-to-head comparison is not possible from this page.

Sat May 16 2026 • llm-stats.com

Arena Performance

Human preference votes

Context Window

Maximum input and output token capacity

Context window sizes are published only for Claude Opus 4: 200,000 input tokens and 32,000 output tokens. GPT-5 Codex's limits are not listed here.

Anthropic
Claude Opus 4
Input: 200,000 tokens
Output: 32,000 tokens
OpenAI
GPT-5 Codex
Input: not listed
Output: not listed
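To put a 200,000-token input limit in concrete terms, here is a minimal sketch using the common rule of thumb of roughly 4 characters per token for English text. The heuristic and the helper names are illustrative assumptions, not any vendor's API; actual token counts vary by tokenizer and content.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; real tokenizers vary by language and content."""
    return int(len(text) / chars_per_token)

def fits_in_context(text: str, input_limit: int = 200_000) -> bool:
    """Check an estimated token count against a model's input window."""
    return estimate_tokens(text) <= input_limit

# ~800,000 characters of English text is roughly a 200K-token input.
doc = "hello " * 100_000          # 600,000 characters
print(estimate_tokens(doc))       # 150000
print(fits_in_context(doc))       # True
```

For real workloads, use the provider's tokenizer or token-counting endpoint rather than a character heuristic; this sketch only shows the order of magnitude a 200K window implies.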

Input Capabilities

Supported data types and modalities

Claude Opus 4 accepts multimodal inputs, whereas GPT-5 Codex is listed here as text-only.

Claude Opus 4 can handle text alongside other data such as images, making it suitable for multimodal applications.

Claude Opus 4: Text, Images
GPT-5 Codex: Text only

(Audio and video input are not indicated as supported for either model.)

License

Usage and distribution terms

Both models are released under proprietary, closed-source licenses, with usage restrictions defined by their respective organizations.

Claude Opus 4

Proprietary

Closed source

GPT-5 Codex

Proprietary

Closed source

Release Timeline

When each model was launched

Claude Opus 4 was released on 2025-05-22, while GPT-5 Codex was released on 2025-09-15.

GPT-5 Codex is roughly four months (116 days) newer than Claude Opus 4.

Claude Opus 4

May 22, 2025

11 months ago

GPT-5 Codex

Sep 15, 2025

8 months ago

~4mo newer
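The gap between the two release dates can be checked directly with Python's standard library (dates taken from this page):

```python
from datetime import date

claude_release = date(2025, 5, 22)   # Claude Opus 4
codex_release = date(2025, 9, 15)    # GPT-5 Codex

gap_days = (codex_release - claude_release).days
print(gap_days)                      # 116
print(round(gap_days / 30.44, 1))    # 3.8 (using the average month length)
```

At about 3.8 average-length months, the gap rounds to either "3mo" or "4 months" depending on convention, which explains the mixed labels on this page.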

Knowledge Cutoff

When training data ends

GPT-5 Codex has a documented knowledge cutoff of 2024-09-30, while Claude Opus 4's cutoff date is not specified.

We can confirm GPT-5 Codex's training data extends to 2024-09-30, but cannot make a direct comparison without Claude Opus 4's cutoff date.

Claude Opus 4: not specified
GPT-5 Codex: Sep 2024

Outputs Comparison


Key Takeaways

Claude Opus 4 (Anthropic): larger context window (200,000 tokens); supports multimodal inputs.
GPT-5 Codex (OpenAI): higher SWE-Bench Verified score (74.5% vs 72.5%).
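The SWE-Bench Verified gap cited above is small in absolute terms; quick arithmetic on the scores reported on this page:

```python
# SWE-Bench Verified scores as reported on this page
codex_score = 74.5   # GPT-5 Codex
claude_score = 72.5  # Claude Opus 4

absolute_gap = codex_score - claude_score          # percentage points
relative_gap = absolute_gap / claude_score * 100   # percent, relative to Claude's score

print(f"{absolute_gap:.1f} points, {relative_gap:.1f}% relative")  # 2.0 points, 2.8% relative
```

A 2-point lead on a single benchmark is a narrow basis for choosing between models; the other capabilities on this page (context window, modalities) may matter more for a given workload.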

Detailed Comparison

AI Model Comparison Table

Feature              Claude Opus 4 (Anthropic)     GPT-5 Codex (OpenAI)
Input context        200,000 tokens                not listed
Output limit         32,000 tokens                 not listed
Multimodal input     yes (text, images)            no (text only)
License              proprietary, closed source    proprietary, closed source
Release date         May 22, 2025                  Sep 15, 2025
Knowledge cutoff     not listed                    Sep 2024
SWE-Bench Verified   72.5%                         74.5%
FAQ

Common questions about Claude Opus 4 vs GPT-5 Codex.

Which is better, Claude Opus 4 or GPT-5 Codex?

GPT-5 Codex scores higher on the one benchmark reported for both models (SWE-Bench Verified). Claude Opus 4 is made by Anthropic and GPT-5 Codex is made by OpenAI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How does Claude Opus 4 compare to GPT-5 Codex in benchmarks?

Claude Opus 4 scores MMMLU: 88.8%, TAU-bench Retail: 81.4%, GPQA: 79.6%, MMMU (validation): 76.5%, and AIME 2025: 75.5%. GPT-5 Codex scores SWE-Bench Verified: 74.5% (vs 72.5% for Claude Opus 4), the only benchmark reported for both models.

What are the context window sizes for Claude Opus 4 and GPT-5 Codex?

Claude Opus 4 supports 200K tokens; GPT-5 Codex's context window is not listed here. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between Claude Opus 4 and GPT-5 Codex?

Key differences include multimodal support (yes vs no). See the full comparison above for benchmark-by-benchmark results.

Who makes Claude Opus 4 and GPT-5 Codex?

Claude Opus 4 is developed by Anthropic and GPT-5 Codex is developed by OpenAI.