Claude 3.5 Haiku vs Codestral-22B Comparison

Comparing Claude 3.5 Haiku and Codestral-22B across benchmarks, pricing, and capabilities.

Performance Benchmarks

Comparative analysis across standard metrics

1 benchmark compared

Claude 3.5 Haiku outperforms on the single shared benchmark, HumanEval (88.1% vs 81.1%); Codestral-22B leads on none.

Claude 3.5 Haiku wins the only benchmark the two models share, so the overall picture rests on a single metric.

Data as of Mar 19, 2026 • llm-stats.com
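For context on what the HumanEval number measures: each problem is a function signature plus docstring, the model writes the body, and the completion is executed against unit tests; pass@1 is the fraction of problems whose first completion passes. Below is a minimal sketch of that loop. The problem, tests, and model_generate stub are hypothetical stand-ins, not the actual HumanEval dataset or harness (a real harness also sandboxes execution).

```python
# Minimal pass@1 sketch of a HumanEval-style evaluation.
# The problem, tests, and model stub are hypothetical stand-ins,
# not the real HumanEval dataset or harness.

problems = [
    {
        "prompt": "def add(a, b):\n",
        "tests": "assert add(2, 3) == 5\nassert add(-1, 1) == 0",
    },
]

def model_generate(prompt: str) -> str:
    # Stand-in for a model API call; returns one candidate completion.
    return "    return a + b\n"

def passes(prompt: str, completion: str, tests: str) -> bool:
    """Define the completed function, then run its unit tests."""
    scope: dict = {}
    try:
        exec(prompt + completion, scope)  # define the function
        exec(tests, scope)                # run the asserts
        return True
    except Exception:
        return False

passed = sum(
    passes(p["prompt"], model_generate(p["prompt"]), p["tests"])
    for p in problems
)
print(f"pass@1 = {passed / len(problems):.1%}")  # pass@1 = 100.0%
```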

Arena Performance

Human preference votes: no arena data is shown for this pair.

Pricing Analysis

Price comparison per million tokens

Pricing is listed for Claude 3.5 Haiku only; no provider pricing is available for Codestral-22B.

Lowest available price from all providers:

Claude 3.5 Haiku (Anthropic)
  Input tokens:  $0.80 per 1M
  Output tokens: $4.00 per 1M
  Best provider: AWS Bedrock

Codestral-22B (Mistral AI)
  Input tokens:  not listed
  Output tokens: not listed
  Best provider: not listed
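To make the per-million-token rates concrete, here is a small sketch that prices a single request at the Claude 3.5 Haiku rates above; the token counts in the example are hypothetical.

```python
# Price one request at per-1M-token rates.
# Rates are the Claude 3.5 Haiku figures listed above;
# the example token counts are hypothetical.
INPUT_RATE_USD = 0.80   # per 1M input tokens
OUTPUT_RATE_USD = 4.00  # per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request."""
    return (
        input_tokens / 1_000_000 * INPUT_RATE_USD
        + output_tokens / 1_000_000 * OUTPUT_RATE_USD
    )

# A 2,000-token prompt with a 500-token reply:
print(f"${request_cost(2_000, 500):.4f}")  # $0.0036
```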

Context Window

Maximum input and output token capacity

Only Claude 3.5 Haiku specifies its context limits: 200,000 tokens for both input and output. Codestral-22B lists neither.

Claude 3.5 Haiku (Anthropic)
  Input:  200,000 tokens
  Output: 200,000 tokens

Codestral-22B (Mistral AI)
  Input:  not listed
  Output: not listed
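Since only Claude 3.5 Haiku lists a limit, a rough pre-flight check like the sketch below can flag prompts that risk overflowing the 200,000-token window. The 4-characters-per-token ratio is a common rule of thumb for English text, not Anthropic's tokenizer; accurate counts need the provider's own token counter.

```python
# Rough fit check against the 200,000-token window listed above.
# CHARS_PER_TOKEN is a heuristic, not the provider's tokenizer.
CONTEXT_LIMIT = 200_000
CHARS_PER_TOKEN = 4  # rough rule of thumb for English text

def fits_in_context(prompt: str, reserved_output_tokens: int = 1_000) -> bool:
    """Estimate tokens from characters and leave room for the reply."""
    estimated_tokens = len(prompt) // CHARS_PER_TOKEN
    return estimated_tokens + reserved_output_tokens <= CONTEXT_LIMIT

print(fits_in_context("Summarize this report: ..."))  # True
```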

License

Usage and distribution terms

Claude 3.5 Haiku is proprietary, while Codestral-22B is released under the Mistral AI Non-Production License (MNPL-0.1).

License differences may affect how you can use these models in commercial or open-source projects.

Claude 3.5 Haiku

Proprietary

Closed source

Codestral-22B

MNPL-0.1

Open weights

Release Timeline

When each model was launched

Claude 3.5 Haiku was released on 2024-10-22, while Codestral-22B was released on 2024-05-29.

Claude 3.5 Haiku is about five months newer than Codestral-22B.

Claude 3.5 Haiku: Oct 22, 2024 (1.4 years ago; ~5 months newer)

Codestral-22B: May 29, 2024 (1.8 years ago)

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

Outputs Comparison

No sample outputs are shown for this pair.

Key Takeaways

Claude 3.5 Haiku: larger context window (200,000 tokens)
Claude 3.5 Haiku: higher HumanEval score (88.1% vs 81.1%)
Codestral-22B: open weights

Detailed Comparison

AI Model Comparison Table

Feature               | Claude 3.5 Haiku (Anthropic) | Codestral-22B (Mistral AI)
Release date          | Oct 22, 2024                 | May 29, 2024
Context window        | 200,000 in / 200,000 out     | not listed
Input price (per 1M)  | $0.80                        | not listed
Output price (per 1M) | $4.00                        | not listed
License               | Proprietary (closed source)  | MNPL-0.1 (open weights)
HumanEval             | 88.1%                        | 81.1%