Model Comparison

o1-preview vs o1-pro

o1-pro significantly outperforms o1-preview on the benchmarks both models report.

Performance Benchmarks

Comparative analysis across standard metrics

Across the 2 benchmarks both models report (AIME 2024, GPQA), o1-pro scores higher on both; o1-preview leads on neither.

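The head-to-head tally above can be recomputed from the scores reported elsewhere on this page:

```python
# Recompute the head-to-head tally from the benchmark scores on this page.
scores = {
    "o1-preview": {"AIME 2024": 42.0, "GPQA": 73.3},
    "o1-pro":     {"AIME 2024": 86.0, "GPQA": 79.0},
}

# Compare only benchmarks that both models report.
shared = set(scores["o1-preview"]) & set(scores["o1-pro"])
wins = {model: 0 for model in scores}
for bench in shared:
    winner = max(scores, key=lambda m: scores[m][bench])
    wins[winner] += 1

print(wins)  # {'o1-preview': 0, 'o1-pro': 2}
```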

Sat Apr 25 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data for o1-pro is unavailable.

Lowest available price from all providers
OpenAI
o1-preview
Input tokens: $15.00 / 1M
Output tokens: $60.00 / 1M
Best provider: OpenAI
OpenAI
o1-pro
Input tokens: not listed
Output tokens: not listed
Best provider: not listed
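Given per-million-token rates like those above, the cost of a single request is straightforward arithmetic. A minimal sketch using the o1-preview rates listed on this page:

```python
# Estimate the USD cost of one request from per-million-token prices.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Tokens are billed per million at the given input/output rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# o1-preview: $15.00 / 1M input tokens, $60.00 / 1M output tokens
cost = request_cost(input_tokens=5_000, output_tokens=2_000,
                    input_price_per_m=15.00, output_price_per_m=60.00)
print(f"${cost:.4f}")  # $0.1950
```

No rates are listed for o1-pro here, so it is omitted from the example.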

Context Window

Maximum input and output token capacity

Only o1-preview specifies context limits: 128,000 input tokens and 32,768 output tokens. o1-pro's limits are not listed.

OpenAI
o1-preview
Input: 128,000 tokens
Output: 32,768 tokens
OpenAI
o1-pro
Input: not listed
Output: not listed
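A quick pre-flight check against o1-preview's documented limits can be sketched as below. The 4-characters-per-token ratio is a rough heuristic, not an exact count; a real tokenizer (e.g. tiktoken) gives accurate numbers:

```python
# Check whether a prompt and requested completion fit o1-preview's
# documented limits (128,000 input tokens, 32,768 output tokens).
INPUT_LIMIT = 128_000
OUTPUT_LIMIT = 32_768

def fits_o1_preview(prompt: str, max_output_tokens: int) -> bool:
    # ~4 characters per token is a crude rule of thumb for English text.
    est_input_tokens = len(prompt) // 4
    return est_input_tokens <= INPUT_LIMIT and max_output_tokens <= OUTPUT_LIMIT

print(fits_o1_preview("hello " * 1_000, 4_096))  # True
```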

Input Capabilities

Supported data types and modalities

o1-pro supports multimodal inputs, whereas o1-preview does not.

o1-pro can handle both text and other forms of data like images, making it suitable for multimodal applications.

o1-preview

Text: supported
Images: not supported
Audio: not supported
Video: not supported

o1-pro

Text: supported
Images: supported
Audio: not supported
Video: not supported
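As an illustration of what multimodal input means in practice, the sketch below builds an OpenAI-style chat message that pairs text with an image reference, following the common image_url content-part convention. The exact endpoint and parameters for o1-pro are assumptions to verify against the provider's current API documentation.

```python
# Build a single user turn that combines text and an image reference,
# in the OpenAI-style content-parts message format (an assumption here;
# check the provider's docs for the exact request shape for o1-pro).
def build_multimodal_message(text: str, image_url: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_multimodal_message(
    "Describe this chart.",
    "https://example.com/chart.png",  # hypothetical URL for illustration
)
# o1-preview accepts text-only input, so the image part above would be
# rejected; o1-pro can process both parts.
```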

License

Usage and distribution terms

Both models are licensed under proprietary licenses.

Both models have usage restrictions defined by their respective organizations.

o1-preview

Proprietary

Closed source

o1-pro

Proprietary

Closed source

Release Timeline

When each model was launched

o1-preview was released on 2024-09-12, while o1-pro was released on 2024-12-17.

o1-pro is 3 months newer than o1-preview.

o1-preview

Sep 12, 2024

1.6 years ago

o1-pro

Dec 17, 2024

1.4 years ago

3mo newer

Knowledge Cutoff

When training data ends

o1-pro has a documented knowledge cutoff of 2023-09-30, while o1-preview's cutoff date is not specified.

We can confirm o1-pro's training data extends to 2023-09-30, but cannot make a direct comparison without o1-preview's cutoff date.

o1-preview

Not specified

o1-pro

Sep 2023

Outputs Comparison


Key Takeaways

o1-preview: larger documented context window (128,000 tokens)
o1-pro: supports multimodal inputs
o1-pro: higher AIME 2024 score (86.0% vs 42.0%)
o1-pro: higher GPQA score (79.0% vs 73.3%)

Detailed Comparison

AI Model Comparison Table

Feature              o1-preview (OpenAI)    o1-pro (OpenAI)
Release date         Sep 12, 2024           Dec 17, 2024
Context (input)      128,000 tokens         not listed
Context (output)     32,768 tokens          not listed
Input price / 1M     $15.00                 not listed
Output price / 1M    $60.00                 not listed
Multimodal input     No (text only)         Yes (text + images)
AIME 2024            42.0%                  86.0%
GPQA                 73.3%                  79.0%
License              Proprietary            Proprietary

FAQ

Common questions about o1-preview vs o1-pro

Which model performs better?
o1-pro significantly outperforms o1-preview on the benchmarks both models report. Both models are made by OpenAI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
o1-preview scores MGSM: 90.8%, MMLU: 90.8%, MATH: 85.5%, GPQA: 73.3%, LiveBench: 52.3%. o1-pro scores AIME 2024: 86.0%, GPQA: 79.0%.

What context window does each model support?
o1-preview supports a 128K-token context window; o1-pro's context window is not listed. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include multimodal support (o1-preview: text only; o1-pro: text and images). See the full comparison above for benchmark-by-benchmark results.