Model Comparison

Grok-1.5 vs Gemma 2 27B

Grok-1.5 significantly outperforms across most benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics


Grok-1.5 leads on all four directly comparable benchmarks (GSM8k, HumanEval, MATH, MMLU); Gemma 2 27B leads on none of them.
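
For readers who want to reproduce this tally, here is a minimal Python sketch using only the scores reported on this page (the dictionary names are illustrative, not an llm-stats.com API):

```python
# Head-to-head benchmark tally using the scores reported on this page.
# Higher is better on all four benchmarks.
grok_1_5 = {"GSM8k": 90.0, "HumanEval": 74.1, "MATH": 50.6, "MMLU": 81.3}
gemma_2_27b = {"GSM8k": 74.0, "HumanEval": 51.8, "MATH": 42.3, "MMLU": 75.2}

grok_wins = sorted(b for b in grok_1_5 if grok_1_5[b] > gemma_2_27b[b])
gemma_wins = sorted(b for b in gemma_2_27b if gemma_2_27b[b] > grok_1_5[b])

print(f"Grok-1.5 leads on {len(grok_wins)} benchmarks: {', '.join(grok_wins)}")
print(f"Gemma 2 27B leads on {len(gemma_wins)} benchmarks")
# Output:
# Grok-1.5 leads on 4 benchmarks: GSM8k, HumanEval, MATH, MMLU
# Gemma 2 27B leads on 0 benchmarks
```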



Arena Performance

Human preference votes

No human preference vote data is available for this comparison.

Pricing Analysis

Price comparison per million tokens

Cost data is unavailable: no provider pricing is listed for either Grok-1.5 or Gemma 2 27B.

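Since no real prices are listed, here is an illustrative Python sketch of how per-million-token pricing translates into request cost; the $0.50/$1.50 rates below are hypothetical placeholders, not quotes for either model:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD of one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical example: 2,000 prompt tokens and 500 completion tokens
# at $0.50 per 1M input tokens and $1.50 per 1M output tokens.
print(f"${request_cost(2_000, 500, 0.50, 1.50):.6f}")  # $0.001750
```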

License

Usage and distribution terms

Grok-1.5 is distributed under a proprietary license, while Gemma 2 27B is released under the Gemma license.

License differences may affect how you can use these models in commercial or open-source projects.

Grok-1.5: Proprietary (closed source)

Gemma 2 27B: Gemma license (open weights)

Release Timeline

When each model was launched

Grok-1.5 was released on Mar 28, 2024, while Gemma 2 27B was released on Jun 27, 2024.

Gemma 2 27B is 3 months newer than Grok-1.5.

Grok-1.5: Mar 28, 2024

Gemma 2 27B: Jun 27, 2024 (3 months newer)
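
As a quick sanity check on the "3 months newer" figure, the gap can be computed from the two release dates above:

```python
from datetime import date

grok_release = date(2024, 3, 28)   # Grok-1.5
gemma_release = date(2024, 6, 27)  # Gemma 2 27B

gap_days = (gemma_release - grok_release).days
print(gap_days)                    # 91 days
print(round(gap_days / 30.44))     # ~3 months (30.44 days per average month)
```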

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

Grok-1.5: higher GSM8k score (90.0% vs 74.0%)
Grok-1.5: higher HumanEval score (74.1% vs 51.8%)
Grok-1.5: higher MATH score (50.6% vs 42.3%)
Grok-1.5: higher MMLU score (81.3% vs 75.2%)
Gemma 2 27B: open weights (Grok-1.5 is closed source)

Detailed Comparison

AI Model Comparison Table

Feature        Grok-1.5 (xAI)   Gemma 2 27B (Google)
GSM8k          90.0%            74.0%
HumanEval      74.1%            51.8%
MATH           50.6%            42.3%
MMLU           81.3%            75.2%
License        Proprietary      Gemma (open weights)
Release date   Mar 28, 2024     Jun 27, 2024

FAQ

Common questions about Grok-1.5 vs Gemma 2 27B

Which model performs better overall?
Grok-1.5 significantly outperforms across most benchmarks. The best choice still depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Grok-1.5 scores GSM8k: 90.0%, DocVQA: 85.6%, MMLU: 81.3%, HumanEval: 74.1%, MMMU: 53.6%. Gemma 2 27B scores ARC-E: 88.6%, HellaSwag: 86.4%, BoolQ: 84.8%, TriviaQA: 83.7%, Winogrande: 83.7%.

What are the key differences between the models?
Key differences include licensing (proprietary for Grok-1.5 vs the Gemma license for Gemma 2 27B). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
Grok-1.5 is developed by xAI, and Gemma 2 27B is developed by Google.