Gemma 2 27B vs GPT-3.5 Turbo Comparison

Comparing Gemma 2 27B and GPT-3.5 Turbo across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

3 benchmarks

Gemma 2 27B outperforms in 1 benchmark (MMLU), while GPT-3.5 Turbo is better in 2 (HumanEval and MATH).

GPT-3.5 Turbo shows better performance in the majority of the benchmarks compared (2 of 3).

Mon Mar 23 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Pricing data for Gemma 2 27B is unavailable.

Lowest available price from all providers
Google: Gemma 2 27B
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown Organization

OpenAI: GPT-3.5 Turbo
Input tokens: $0.50
Output tokens: $1.50
Best provider: Azure
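The per-million-token rates above translate directly into request costs. A minimal sketch, using GPT-3.5 Turbo's listed prices ($0.50 input, $1.50 output per million tokens); the function name and example token counts are illustrative, not a provider API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request, given per-million-token rates."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A request with 10,000 input tokens and 2,000 output tokens
# at GPT-3.5 Turbo's listed $0.50 / $1.50 rates:
cost = request_cost(10_000, 2_000, 0.50, 1.50)
print(f"${cost:.4f}")  # $0.0080
```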

Context Window

Maximum input and output token capacity

Only GPT-3.5 Turbo specifies its context limits: 16,385 input tokens and 4,096 output tokens. Gemma 2 27B's limits are not listed.

Google: Gemma 2 27B
Input: not specified
Output: not specified

OpenAI: GPT-3.5 Turbo
Input: 16,385 tokens
Output: 4,096 tokens
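A quick sketch of what these limits mean in practice, assuming (as OpenAI documents for this model) that the 16,385-token window is shared between prompt and completion, so output room must be reserved up front; real token counts require a tokenizer:

```python
CONTEXT_WINDOW = 16_385   # GPT-3.5 Turbo's listed context window
MAX_OUTPUT = 4_096        # GPT-3.5 Turbo's listed max output tokens

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the reply."""
    return CONTEXT_WINDOW - reserved_output

# Reserving the full 4,096-token output budget leaves:
print(max_prompt_tokens())  # 12289
```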

License

Usage and distribution terms

Gemma 2 27B is licensed under Gemma, while GPT-3.5 Turbo uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

Gemma 2 27B

Gemma

Open weights

GPT-3.5 Turbo

Proprietary

Closed source

Release Timeline

When each model was launched

Gemma 2 27B was released on 2024-06-27, while GPT-3.5 Turbo was released on 2023-03-21.

Gemma 2 27B is 15 months newer than GPT-3.5 Turbo.

Gemma 2 27B
Jun 27, 2024 (1.7 years ago)

GPT-3.5 Turbo
Mar 21, 2023 (3.0 years ago)

Gemma 2 27B is about 1.3 years newer.
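The age gap above is simple date arithmetic on the two release dates; a sketch using Python's standard library:

```python
from datetime import date

gemma_2_27b_release = date(2024, 6, 27)   # Gemma 2 27B release
gpt_35_turbo_release = date(2023, 3, 21)  # GPT-3.5 Turbo release

delta_days = (gemma_2_27b_release - gpt_35_turbo_release).days
print(delta_days)                          # 464 days
print(round(delta_days / 365.25, 1))       # 1.3 (years), roughly 15 months
```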

Knowledge Cutoff

When training data ends

GPT-3.5 Turbo has a documented knowledge cutoff of 2021-09-30, while Gemma 2 27B's cutoff date is not specified.

We can confirm GPT-3.5 Turbo's training data extends to 2021-09-30, but cannot make a direct comparison without Gemma 2 27B's cutoff date.

Gemma 2 27B: not specified

GPT-3.5 Turbo: Sep 2021

Outputs Comparison


Key Takeaways

Gemma 2 27B has open weights
Gemma 2 27B has the higher MMLU score (75.2% vs 69.8%)
GPT-3.5 Turbo has the larger documented context window (16,385 tokens)
GPT-3.5 Turbo has the higher HumanEval score (68.0% vs 51.8%)
GPT-3.5 Turbo has the higher MATH score (43.1% vs 42.3%)

Detailed Comparison

Feature-by-feature comparison table for Google's Gemma 2 27B and OpenAI's GPT-3.5 Turbo.