Claude Opus 4.6 vs Sarvam-105B Comparison

Comparing Claude Opus 4.6 and Sarvam-105B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

5 benchmarks

Claude Opus 4.6 leads in all 5 compared benchmarks (AIME 2025, BrowseComp, GPQA, Humanity's Last Exam, SWE-Bench Verified), while Sarvam-105B does not lead in any.

Claude Opus 4.6 significantly outperforms Sarvam-105B across every benchmark compared here.

Tue Mar 17 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Cost data unavailable.

Lowest available price from all providers
Anthropic — Claude Opus 4.6
Input tokens: not listed
Output tokens: not listed
Best provider: unknown

Sarvam AI — Sarvam-105B
Input tokens: not listed
Output tokens: not listed
Best provider: unknown
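Per-million-token prices translate to request cost with simple arithmetic. A minimal sketch; the prices used below are hypothetical placeholders, since real figures are unavailable above:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical prices: $5 per million input tokens, $25 per million output tokens.
print(round(request_cost(500_000, 250_000, 5.0, 25.0), 2))  # → 8.75
```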

Input Capabilities

Supported data types and modalities

Claude Opus 4.6 supports multimodal inputs, whereas Sarvam-105B does not.

Claude Opus 4.6 can handle both text and image inputs, making it suitable for multimodal applications; Sarvam-105B accepts text only.

Claude Opus 4.6

Text: supported
Images: supported
Audio: not supported
Video: not supported

Sarvam-105B

Text: supported
Images: not supported
Audio: not supported
Video: not supported
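Multimodal support matters at the API level: images are sent alongside text as content blocks in a single user message. A minimal sketch of the payload shape used by the Anthropic Messages API (no network call is made; the media type and prompt are placeholders):

```python
import base64

def build_multimodal_message(text: str, image_bytes: bytes) -> dict:
    """Build one user message pairing an image with a text prompt,
    in the content-block shape used by the Anthropic Messages API."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",  # placeholder media type
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": text},
        ],
    }

msg = build_multimodal_message("Describe this chart.", b"\x89PNG...")
print(msg["content"][0]["type"], msg["content"][1]["type"])  # → image text
```

A text-only model like Sarvam-105B would reject the image block; only the `{"type": "text", ...}` part applies.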

License

Usage and distribution terms

Claude Opus 4.6 is licensed under a proprietary license, while Sarvam-105B uses Apache 2.0.

License differences may affect how you can use these models in commercial or open-source projects.

Claude Opus 4.6

Proprietary

Closed source

Sarvam-105B

Apache 2.0

Open weights

Release Timeline

When each model was launched

Claude Opus 4.6 was released on 2026-02-05, while Sarvam-105B was released on 2026-03-06.

Sarvam-105B is 1 month newer than Claude Opus 4.6.

Claude Opus 4.6

Feb 5, 2026

1 month ago

Sarvam-105B

Mar 6, 2026

1 week ago

4 weeks newer
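The "newer by" figure follows directly from the two release dates; a quick check with Python's standard-library datetime:

```python
from datetime import date

claude_release = date(2026, 2, 5)   # Claude Opus 4.6
sarvam_release = date(2026, 3, 6)   # Sarvam-105B

delta = sarvam_release - claude_release
print(delta.days, "days ≈", delta.days // 7, "weeks")  # → 29 days ≈ 4 weeks
```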

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

The recency of their training data therefore cannot be compared.

No cutoff dates available

Outputs Comparison


Key Takeaways

Claude Opus 4.6

Supports multimodal inputs
Higher AIME 2025 score (99.8% vs 96.7%)
Higher BrowseComp score (84.0% vs 49.5%)
Higher GPQA score (91.3% vs 78.7%)
Higher Humanity's Last Exam score (53.1% vs 11.2%)
Higher SWE-Bench Verified score (80.8% vs 45.0%)

Sarvam-105B

Has open weights

Detailed Comparison

AI Model Comparison Table
Feature | Claude Opus 4.6 (Anthropic) | Sarvam-105B (Sarvam AI)