GPT-4 Turbo vs Grok-1.5 Comparison

Comparing GPT-4 Turbo and Grok-1.5 across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

GPT-4 Turbo leads in all four benchmarks compared (GPQA, HumanEval, MATH, MMLU), while Grok-1.5 leads in none.

GPT-4 Turbo significantly outperforms Grok-1.5 on every benchmark compared.

Sat Mar 14 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data for Grok-1.5 is unavailable.

Lowest available price from all providers
Sat Mar 14 2026 • llm-stats.com
OpenAI
GPT-4 Turbo
Input tokens: $10.00
Output tokens: $30.00
Best provider: Azure

xAI
Grok-1.5
Input tokens: not available
Output tokens: not available
Best provider: not available
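To make the per-million-token rates concrete, here is a minimal cost-estimate sketch using the GPT-4 Turbo prices listed above ($10.00 input, $30.00 output per million tokens). The function name and example token counts are illustrative; actual prices vary by provider.

```python
# Listed GPT-4 Turbo rates, USD per million tokens (lowest available provider price).
INPUT_PER_M = 10.00
OUTPUT_PER_M = 30.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0350
```

At these rates, output tokens dominate cost for generation-heavy workloads, since they are billed at three times the input rate.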

Context Window

Maximum input and output token capacity

Only GPT-4 Turbo publishes context limits: a 128,000-token input window and a 4,096-token output cap. Grok-1.5's limits are not specified.

OpenAI
GPT-4 Turbo
Input: 128,000 tokens
Output: 4,096 tokens

xAI
Grok-1.5
Input: not specified
Output: not specified
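A simple sketch of how the published GPT-4 Turbo limits might be checked before sending a request. The function name is hypothetical, and the token counts are assumed to come from an external tokenizer.

```python
# Published GPT-4 Turbo limits from the table above.
MAX_INPUT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 4_096

def fits_context(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """True if the request fits within both documented limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and max_completion_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context(120_000, 4_000))   # True
print(fits_context(130_000, 4_000))   # False: prompt exceeds the input window
```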

License

Usage and distribution terms

Both models are proprietary (closed source), with usage restrictions defined by their respective organizations.

GPT-4 Turbo

Proprietary

Closed source

Grok-1.5

Proprietary

Closed source

Release Timeline

When each model was launched

GPT-4 Turbo was released on 2024-04-09, while Grok-1.5 was released on 2024-03-28.

GPT-4 Turbo is 12 days newer than Grok-1.5.

GPT-4 Turbo: Apr 9, 2024 (about 1.9 years ago; 12 days newer)

Grok-1.5: Mar 28, 2024 (about 2.0 years ago)

Knowledge Cutoff

When training data ends

GPT-4 Turbo has a documented knowledge cutoff of 2023-12-31, while Grok-1.5's cutoff date is not specified.

We can confirm GPT-4 Turbo's training data extends to 2023-12-31, but cannot make a direct comparison without Grok-1.5's cutoff date.

GPT-4 Turbo: Dec 2023

Grok-1.5: not specified

Outputs Comparison


Key Takeaways

GPT-4 Turbo advantages:
Larger context window (128,000 tokens)
Higher GPQA score (48.0% vs 35.9%)
Higher HumanEval score (87.1% vs 74.1%)
Higher MATH score (72.6% vs 50.6%)
Higher MMLU score (86.5% vs 81.3%)
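The benchmark scores above can be turned into per-benchmark margins with a short computation; the figures are taken directly from the takeaways list.

```python
# (GPT-4 Turbo, Grok-1.5) scores in percent, from the takeaways above.
scores = {
    "GPQA":      (48.0, 35.9),
    "HumanEval": (87.1, 74.1),
    "MATH":      (72.6, 50.6),
    "MMLU":      (86.5, 81.3),
}

# Margin by which GPT-4 Turbo leads on each benchmark, in percentage points.
for name, (gpt4t, grok) in scores.items():
    print(f"{name:<10} GPT-4 Turbo leads by {gpt4t - grok:.1f} points")
```

The largest gap is on MATH (22.0 points), the smallest on MMLU (5.2 points).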

Detailed Comparison

AI Model Comparison Table

Feature | GPT-4 Turbo (OpenAI) | Grok-1.5 (xAI)