Model Comparison

o1 vs o1-preview

o1 shows notably better performance in the majority of benchmarks, while the two models cost the same.

Performance Benchmarks

Comparative analysis across standard metrics

8 benchmarks

o1 outperforms on 6 of the 8 benchmarks (AIME 2024, GPQA, LiveBench, MATH, MMLU, SimpleQA), while o1-preview leads on 2 (MGSM, SWE-Bench Verified).


May 2, 2026 • llm-stats.com

Arena Performance

Human preference votes (chart not included)

Pricing Analysis

Price comparison per million tokens

For input processing, both models cost $15.00 per million tokens.

For output processing, both models cost $60.00 per million tokens.

In conclusion, o1 and o1-preview cost the same.*

* Using a 3:1 ratio of input to output tokens
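As an illustrative sketch (not from llm-stats.com), the arithmetic behind that blended figure, applying the 3:1 input-to-output weighting to the listed prices, looks like this:

```python
# Blended price per 1M tokens at a 3:1 input:output ratio,
# using the lowest-provider prices listed below.
def blended_price(input_per_m: float, output_per_m: float) -> float:
    return (3 * input_per_m + 1 * output_per_m) / 4

# Both models list $15.00 input / $60.00 output:
print(blended_price(15.00, 60.00))  # 26.25, i.e. $26.25 per 1M blended tokens
```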

Lowest available price from all providers
OpenAI o1: input $15.00/1M, output $60.00/1M (best provider: Azure)
OpenAI o1-preview: input $15.00/1M, output $60.00/1M (best provider: OpenAI)

Context Window

Maximum input and output token capacity

o1 accepts 200,000 input tokens compared to o1-preview's 128,000 tokens, and can generate longer responses of up to 100,000 tokens, while o1-preview is limited to 32,768 tokens.
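As an illustrative sketch (assuming the o1 family uses tiktoken's o200k_base encoding; the limits are taken from the figures above), a pre-flight fit check could look like:

```python
import tiktoken  # pip install tiktoken

# Context limits from the comparison above
LIMITS = {
    "o1": {"input": 200_000, "output": 100_000},
    "o1-preview": {"input": 128_000, "output": 32_768},
}

def fits(model: str, prompt: str, max_output_tokens: int) -> bool:
    """Return True if the prompt and the reserved output budget fit the model."""
    enc = tiktoken.get_encoding("o200k_base")  # assumed encoding for this family
    n_input = len(enc.encode(prompt))
    lim = LIMITS[model]
    return n_input <= lim["input"] and max_output_tokens <= lim["output"]

print(fits("o1-preview", "Summarize this document...", 40_000))  # False: 40k > 32,768
```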

OpenAI o1: input 200,000 tokens; output 100,000 tokens
OpenAI o1-preview: input 128,000 tokens; output 32,768 tokens

License

Usage and distribution terms

Both models are proprietary and closed source, with usage restrictions defined by OpenAI.

o1: Proprietary, closed source
o1-preview: Proprietary, closed source

Release Timeline

When each model was launched

o1 was released on 2024-12-17, while o1-preview was released on 2024-09-12.

o1 is 3 months newer than o1-preview.

o1: Dec 17, 2024 (1.4 years ago; 3 months newer)
o1-preview: Sep 12, 2024 (1.6 years ago)
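To make the gap concrete, here is a quick sketch (plain Python, dates taken from the timeline above) of the difference between the two release dates:

```python
from datetime import date

o1_release = date(2024, 12, 17)
preview_release = date(2024, 9, 12)
print((o1_release - preview_release).days)  # 96 days, roughly three months
```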

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Provider Availability

Both o1 and o1-preview are available from OpenAI and Azure.

o1

Azure: input $15.00/1M, output $60.00/1M
OpenAI: input $15.00/1M, output $60.00/1M

o1-preview

OpenAI: input $15.00/1M, output $60.00/1M
Azure: input $16.50/1M, output $66.00/1M
* Prices shown are per million tokens
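Since o1-preview's Azure rate differs from OpenAI's, the sketch below (illustrative only; prices taken from the cards above, token counts invented for the example) shows how provider choice changes per-request cost:

```python
# Cost of one request at per-million-token rates.
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    return input_tokens / 1e6 * input_per_m + output_tokens / 1e6 * output_per_m

# o1-preview, 30k input / 10k output tokens:
print(request_cost(30_000, 10_000, 15.00, 60.00))  # OpenAI: 1.05  -> $1.05
print(request_cost(30_000, 10_000, 16.50, 66.00))  # Azure:  1.155 -> ~$1.16
```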


Key Takeaways

Where o1 leads:
Larger context window (200,000 vs 128,000 tokens)
Higher AIME 2024 score (74.3% vs 42.0%)
Higher GPQA score (78.0% vs 73.3%)
Higher LiveBench score (67.0% vs 52.3%)
Higher MATH score (96.4% vs 85.5%)
Higher MMLU score (91.8% vs 90.8%)
Higher SimpleQA score (47.0% vs 42.4%)

Where o1-preview leads:
Higher MGSM score (90.8% vs 89.3%)
Higher SWE-Bench Verified score (41.3% vs 41.0%)
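As a sanity check on the 6-to-2 split quoted earlier, here is a small sketch that tallies the scores listed above (o1 first, o1-preview second in each pair):

```python
# (o1, o1-preview) scores in percent, from the takeaways above.
SCORES = {
    "AIME 2024":          (74.3, 42.0),
    "GPQA":               (78.0, 73.3),
    "LiveBench":          (67.0, 52.3),
    "MATH":               (96.4, 85.5),
    "MMLU":               (91.8, 90.8),
    "SimpleQA":           (47.0, 42.4),
    "MGSM":               (89.3, 90.8),
    "SWE-Bench Verified": (41.0, 41.3),
}

o1_wins = sum(o1 > pre for o1, pre in SCORES.values())
pre_wins = sum(pre > o1 for o1, pre in SCORES.values())
print(o1_wins, pre_wins)  # 6 2
```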


FAQ

Common questions about o1 vs o1-preview.

Which is better, o1 or o1-preview?

o1 shows notably better performance in the majority of benchmarks. Both models are made by OpenAI, so the best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How does o1 compare to o1-preview in benchmarks?

o1 scores GSM8k: 97.1%, MATH: 96.4%, GPQA Physics: 92.8%, MMLU: 91.8%, MGSM: 89.3%. o1-preview scores MGSM: 90.8%, MMLU: 90.8%, MATH: 85.5%, GPQA: 73.3%, LiveBench: 52.3%.

Is o1 cheaper than o1-preview?

No. Both models cost $15.00 per million input tokens and $60.00 per million output tokens at their lowest-priced providers.

What are the context window sizes for o1 and o1-preview?

o1 supports 200K tokens and o1-preview supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between o1 and o1-preview?

Key differences include the context window (200K vs 128K input tokens) and release date (December 2024 vs September 2024). See the full comparison above for benchmark-by-benchmark results.