o3-mini vs Sarvam-105B Comparison

Comparing o3-mini and Sarvam-105B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

4 benchmarks

o3-mini leads on two of the four benchmarks (IFEval, SWE-Bench Verified), while Sarvam-105B leads on the other two (GPQA, MMLU); overall the models are evenly matched.

Sat Mar 14 2026 • llm-stats.com

Arena Performance

Human preference votes

No arena vote data is available for this comparison.

Pricing Analysis

Price comparison per million tokens

Pricing for Sarvam-105B is unavailable; the $0.00 figures below reflect missing data, not a free tier.

Lowest available price from all providers
OpenAI o3-mini
Input tokens: $1.10
Output tokens: $4.40
Best provider: Azure

Sarvam AI Sarvam-105B
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown
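The per-million-token prices above translate into request costs with simple arithmetic. A minimal sketch using the o3-mini figures (the function name and token counts are illustrative, not part of any API):

```python
# Estimate request cost from per-million-token pricing.
# Prices are the o3-mini figures quoted above (USD per 1M tokens).
INPUT_PRICE = 1.10
OUTPUT_PRICE = 4.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # $0.0198
```

Note that output tokens cost 4x input tokens here, so completion length dominates the bill for generation-heavy workloads.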

Context Window

Maximum input and output token capacity

o3-mini specifies a 200,000-token input context and a 100,000-token output context; Sarvam-105B does not publish context window figures.

OpenAI o3-mini
Input: 200,000 tokens
Output: 100,000 tokens

Sarvam AI Sarvam-105B
Input: not specified
Output: not specified
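Published limits like these are typically enforced per request, so a pre-flight check is useful. A minimal sketch, assuming the o3-mini figures above; the token counts are placeholders that a real tokenizer would supply:

```python
# Check a request against o3-mini's published limits
# (200,000 input tokens, 100,000 output tokens).
MAX_INPUT_TOKENS = 200_000
MAX_OUTPUT_TOKENS = 100_000

def fits_context(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """True if the request stays within both published limits."""
    return (prompt_tokens <= MAX_INPUT_TOKENS
            and max_completion_tokens <= MAX_OUTPUT_TOKENS)

print(fits_context(150_000, 50_000))   # True
print(fits_context(250_000, 50_000))   # False: prompt exceeds input window
```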

License

Usage and distribution terms

o3-mini is licensed under a proprietary license, while Sarvam-105B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

o3-mini

Proprietary

Closed source

Sarvam-105B

Apache 2.0

Open weights

Release Timeline

When each model was launched

o3-mini was released on 2025-01-30, while Sarvam-105B was released on 2026-03-06.

Sarvam-105B is 13 months newer than o3-mini.

o3-mini

Jan 30, 2025

1.1 years ago

Sarvam-105B

Mar 6, 2026

1 week ago

1.1 yr newer
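The release gap quoted above can be checked directly from the two dates:

```python
from datetime import date

# Release dates from the timeline above.
o3_mini_release = date(2025, 1, 30)
sarvam_release = date(2026, 3, 6)

gap_days = (sarvam_release - o3_mini_release).days
print(gap_days)                      # 400 days
print(round(gap_days / 365.25, 1))   # 1.1 years, i.e. roughly 13 months
```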

Knowledge Cutoff

When training data ends

o3-mini has a documented knowledge cutoff of 2023-09-30, while Sarvam-105B's cutoff date is not specified.

We can confirm o3-mini's training data extends to 2023-09-30, but cannot make a direct comparison without Sarvam-105B's cutoff date.

o3-mini

Sep 2023

Sarvam-105B

Not specified



Key Takeaways

o3-mini:
Larger context window (200,000 tokens)
Higher IFEval score (93.9% vs 84.8%)
Higher SWE-Bench Verified score (49.3% vs 45.0%)

Sarvam-105B:
Has open weights
Higher GPQA score (78.7% vs 77.2%)
Higher MMLU score (90.6% vs 86.9%)

Detailed Comparison

AI Model Comparison Table
Feature | OpenAI o3-mini | Sarvam AI Sarvam-105B