Grok-1.5 vs Llama 3.2 3B Instruct Comparison

Comparing Grok-1.5 and Llama 3.2 3B Instruct across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

Across the 4 benchmarks compared (GPQA, GSM8k, MATH, MMLU), Grok-1.5 scores higher on every one; Llama 3.2 3B Instruct does not lead on any of them.

Grok-1.5 significantly outperforms on most benchmarks.

Thu Mar 19 2026 • llm-stats.com

Arena Performance

Human preference votes (no arena data is listed for this pair)

Pricing Analysis

Price comparison per million tokens

Pricing data is unavailable for Grok-1.5; only Llama 3.2 3B Instruct has listed prices, shown below per million tokens.

Lowest available price from all providers
Grok-1.5 (xAI)
  Input tokens: not listed
  Output tokens: not listed
  Best provider: Unknown Organization
Llama 3.2 3B Instruct (Meta)
  Input tokens: $0.01 per million
  Output tokens: $0.02 per million
  Best provider: Deepinfra
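
As a rough illustration of what these per-million-token rates imply, the sketch below (Python, with hypothetical token counts) estimates the cost of a single request at Llama 3.2 3B Instruct's listed Deepinfra prices.

    # Rough per-request cost from the per-million-token prices listed above.
    # Prices are the table values for Llama 3.2 3B Instruct via Deepinfra;
    # the token counts in the example are hypothetical.
    INPUT_PRICE_PER_M = 0.01   # USD per 1M input tokens
    OUTPUT_PRICE_PER_M = 0.02  # USD per 1M output tokens

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimated USD cost of one request."""
        return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
             + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

    # Example: a 2,000-token prompt with a 500-token completion.
    print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000030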

Context Window

Maximum input and output token capacity

Only Llama 3.2 3B Instruct specifies its context window: 128,000 tokens for both input and output. No context figures are listed for Grok-1.5.

Grok-1.5 (xAI)
  Input: not listed
  Output: not listed
Llama 3.2 3B Instruct (Meta)
  Input: 128,000 tokens
  Output: 128,000 tokens
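
As a minimal sketch of what a 128,000-token window means in practice, the snippet below checks whether a prompt plus a reserved completion budget fits within Llama 3.2 3B Instruct's listed limit; the ~4-characters-per-token figure is a rough heuristic, not the model's actual tokenizer.

    # Check whether a request fits in the 128,000-token window listed above.
    # Token counts are approximated at ~4 characters per token (a rough
    # English-text heuristic, not the model's real tokenizer).
    CONTEXT_WINDOW = 128_000  # tokens

    def approx_tokens(text: str) -> int:
        return max(1, len(text) // 4)

    def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
        """True if the prompt plus the reserved output budget fits in the window."""
        return approx_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOW

    print(fits_in_context("Summarize this report. " * 10_000, max_output_tokens=1_000))
    # -> True (~60,000 prompt tokens + 1,000 output tokens is well under 128,000)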

License

Usage and distribution terms

Grok-1.5 is licensed under a proprietary license, while Llama 3.2 3B Instruct uses Llama 3.2 Community License.

License differences may affect how you can use these models in commercial or open-source projects.

Grok-1.5

Proprietary

Closed source

Llama 3.2 3B Instruct

Llama 3.2 Community License

Open weights

Release Timeline

When each model was launched

Grok-1.5 was released on 2024-03-28, while Llama 3.2 3B Instruct was released on 2024-09-25.

Llama 3.2 3B Instruct is 6 months newer than Grok-1.5.

Grok-1.5

Mar 28, 2024

2.0 years ago

Llama 3.2 3B Instruct

Sep 25, 2024

1.5 years ago

6mo newer
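
The age figures above follow directly from the release dates; here is a quick check in Python, assuming the page's reference date of March 19, 2026.

    # Recompute the gap and "years ago" figures from the release dates above,
    # using this page's March 19, 2026 timestamp as the reference date.
    from datetime import date

    GROK_RELEASE = date(2024, 3, 28)
    LLAMA_RELEASE = date(2024, 9, 25)
    REFERENCE = date(2026, 3, 19)

    gap_months = (LLAMA_RELEASE - GROK_RELEASE).days / 30.44    # ~5.9 months
    grok_age = (REFERENCE - GROK_RELEASE).days / 365.25         # ~2.0 years
    llama_age = (REFERENCE - LLAMA_RELEASE).days / 365.25       # ~1.5 years

    print(f"Gap: {gap_months:.1f} months; Grok-1.5: {grok_age:.1f} years ago; "
          f"Llama 3.2 3B Instruct: {llama_age:.1f} years ago")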

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.


Key Takeaways

Grok-1.5: higher GPQA score (35.9% vs 32.8%)
Grok-1.5: higher GSM8k score (90.0% vs 77.7%)
Grok-1.5: higher MATH score (50.6% vs 48.0%)
Grok-1.5: higher MMLU score (81.3% vs 63.4%); per-benchmark margins are worked out in the sketch below
Llama 3.2 3B Instruct: larger listed context window (128,000 tokens)
Llama 3.2 3B Instruct: open weights
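
To make the benchmark margins concrete, the short sketch below computes the absolute and relative gaps from the scores listed above; the values are simply those reported on this page.

    # Per-benchmark gaps between Grok-1.5 and Llama 3.2 3B Instruct,
    # using the scores listed in the takeaways above (percentages).
    SCORES = {
        "GPQA":  (35.9, 32.8),
        "GSM8k": (90.0, 77.7),
        "MATH":  (50.6, 48.0),
        "MMLU":  (81.3, 63.4),
    }

    for name, (grok, llama) in SCORES.items():
        gap = grok - llama
        rel = 100 * gap / llama
        print(f"{name}: Grok-1.5 leads by {gap:.1f} points ({rel:.0f}% relative)")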

Detailed Comparison

AI Model Comparison Table: Feature | Grok-1.5 (xAI) | Llama 3.2 3B Instruct (Meta)