Grok-1.5 vs o1-mini
Comparing Grok-1.5 by xAI and o1-mini by OpenAI across benchmarks, pricing, and capabilities.
o1-mini significantly outperforms across most benchmarks.
Performance Benchmarks
Comparative analysis across standard metrics
Grok-1.5 does not lead on any benchmark, while o1-mini is better on all 3 compared (GPQA, HumanEval, MMLU).
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
Cost data unavailable.
Context Window
Maximum input and output token capacity
Grok-1.5 does not publish context window figures. o1-mini supports up to 128,000 input tokens and 65,536 output tokens.
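The limits above can be checked programmatically before sending a request. The sketch below is a minimal illustration, assuming a rough 4-characters-per-token estimate (a common heuristic for English text, not an official tokenizer); the constants mirror o1-mini's documented limits.

```python
# Hedged sketch: check a request against o1-mini's documented limits
# (128,000 input tokens, 65,536 output tokens).
# The 4-chars-per-token ratio is a rough assumption, not OpenAI's tokenizer.

O1_MINI_INPUT_LIMIT = 128_000
O1_MINI_OUTPUT_LIMIT = 65_536

def rough_token_count(text: str) -> int:
    """Approximate token count (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def fits_o1_mini(prompt: str, max_output_tokens: int) -> bool:
    """True if the prompt and requested output fit within the model limits."""
    return (rough_token_count(prompt) <= O1_MINI_INPUT_LIMIT
            and max_output_tokens <= O1_MINI_OUTPUT_LIMIT)

print(fits_o1_mini("Summarize this article.", 4_096))  # small request fits
```

Since Grok-1.5 does not publish its limits, an equivalent check for it would require assumed constants.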
License
Usage and distribution terms
Both models are licensed under proprietary licenses.
Both models have usage restrictions defined by their respective organizations.
Grok-1.5: Proprietary (closed source)
o1-mini: Proprietary (closed source)
Release Timeline
When each model was launched
Grok-1.5 was released on 2024-03-28, while o1-mini was released on 2024-09-12.
o1-mini is roughly five and a half months newer than Grok-1.5.
Grok-1.5: Mar 28, 2024
o1-mini: Sep 12, 2024
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Outputs Comparison
Key Takeaways
o1-mini (OpenAI) is the stronger model across the compared benchmarks.