Model Comparison

Codestral-22B vs o1-mini

o1-mini scores higher on the only benchmark the two models share (HumanEval) and reports strong results across a broader set of benchmarks.

Performance Benchmarks

Comparative analysis across standard metrics

1 benchmark compared

Codestral-22B leads on 0 shared benchmarks, while o1-mini leads on the 1 benchmark both models report (HumanEval: 92.4% vs 81.1%).
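To make the overlap explicit, here is a small Python sketch (hypothetical, using only the scores quoted in the FAQ below) that lists each model's reported benchmarks and confirms HumanEval is the only one they share:

```python
# Reported benchmark scores (percent), as quoted on this page.
codestral_22b = {
    "HumanEvalFIM-Average": 91.6,
    "HumanEval": 81.1,
    "MBPP": 78.2,
    "Spider": 63.5,
    "HumanEval-Average": 61.5,
}
o1_mini = {
    "HumanEval": 92.4,
    "MATH-500": 90.0,
    "MMLU": 85.2,
    "SuperGLUE": 75.0,
    "GPQA": 60.0,
}

# Benchmarks reported by both models (only HumanEval here).
shared = sorted(codestral_22b.keys() & o1_mini.keys())
for name in shared:
    winner = "o1-mini" if o1_mini[name] > codestral_22b[name] else "Codestral-22B"
    print(f"{name}: Codestral-22B {codestral_22b[name]}% vs o1-mini {o1_mini[name]}% -> {winner}")
# Output: HumanEval: Codestral-22B 81.1% vs o1-mini 92.4% -> o1-mini
```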

Data as of Apr 22, 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Pricing data is unavailable for Codestral-22B; o1-mini's list prices are shown below.

Lowest available price from all providers
Mistral AI
Codestral-22B
Input tokens: not listed
Output tokens: not listed
Best provider: not listed

OpenAI
o1-mini
Input tokens: $3.00
Output tokens: $12.00
Best provider: OpenAI
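As a rough illustration of how per-million-token prices translate into per-request cost, the Python sketch below uses the o1-mini list prices from the table above; the request size (2,000 input and 500 output tokens) is just an assumed example, not a figure from this page.

```python
# o1-mini list prices from the table above, in USD per million tokens.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 12.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request at the listed per-million-token prices."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a hypothetical request with 2,000 input tokens and 500 output tokens.
print(f"${request_cost(2_000, 500):.4f}")  # -> $0.0120
```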

Context Window

Maximum input and output token capacity

Only o1-mini specifies its context limits: 128,000 tokens of input and 65,536 tokens of output. Codestral-22B's limits are not listed here.

Mistral AI
Codestral-22B
Input: not specified
Output: not specified

OpenAI
o1-mini
Input: 128,000 tokens
Output: 65,536 tokens
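Context limits are enforced per request, so a quick pre-flight check can be useful. The sketch below is a hypothetical Python helper built around o1-mini's published limits; the token counts passed in are assumed inputs rather than values computed from real text.

```python
# Published o1-mini limits from the table above.
MAX_INPUT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 65_536

def fits_o1_mini(prompt_tokens: int, max_completion_tokens: int) -> bool:
    """Return True if the prompt and requested completion fit the published limits."""
    return prompt_tokens <= MAX_INPUT_TOKENS and max_completion_tokens <= MAX_OUTPUT_TOKENS

print(fits_o1_mini(100_000, 4_000))  # True
print(fits_o1_mini(150_000, 4_000))  # False: prompt exceeds the 128K input window
```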

License

Usage and distribution terms

Codestral-22B is licensed under MNPL-0.1 (the Mistral AI Non-Production License), while o1-mini uses a proprietary license.

License differences may affect how you can use these models in commercial or open-source projects.

Codestral-22B

MNPL-0.1

Open weights

o1-mini

Proprietary

Closed source

Release Timeline

When each model was launched

Codestral-22B was released on 2024-05-29, while o1-mini was released on 2024-09-12.

o1-mini is about 3.5 months newer than Codestral-22B.

Codestral-22B

May 29, 2024

1.9 years ago

o1-mini

Sep 12, 2024

1.6 years ago

About 3.5 months newer
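For reference, the gap between the two release dates works out to about 3.5 months (106 days); a quick Python check using the dates listed above:

```python
from datetime import date

codestral_release = date(2024, 5, 29)
o1_mini_release = date(2024, 9, 12)

gap_days = (o1_mini_release - codestral_release).days
print(gap_days)                     # 106 days
print(round(gap_days / 30.44, 1))   # ~3.5 months, using an average month length
```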

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.

No cutoff dates available

Key Takeaways

Codestral-22B: open weights (MNPL-0.1)
o1-mini: larger context window (128,000 tokens)
o1-mini: higher HumanEval score (92.4% vs 81.1%)

Detailed Comparison

AI Model Comparison Table

Feature                  Codestral-22B (Mistral AI)     o1-mini (OpenAI)
Release date             May 29, 2024                   Sep 12, 2024
License                  MNPL-0.1 (open weights)        Proprietary (closed source)
Context window           not specified                  128,000 in / 65,536 out tokens
Input price (per 1M)     not listed                     $3.00
Output price (per 1M)    not listed                     $12.00
HumanEval                81.1%                          92.4%

FAQ

Common questions about Codestral-22B vs o1-mini

Which model performs better on benchmarks?
o1-mini scores higher on the one benchmark both models report (HumanEval) and posts strong results across a wider range of benchmarks. Codestral-22B is made by Mistral AI and o1-mini is made by OpenAI. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Codestral-22B scores HumanEvalFIM-Average: 91.6%, HumanEval: 81.1%, MBPP: 78.2%, Spider: 63.5%, HumanEval-Average: 61.5%. o1-mini scores HumanEval: 92.4%, MATH-500: 90.0%, MMLU: 85.2%, SuperGLUE: 75.0%, GPQA: 60.0%.

What context window does each model support?
Codestral-22B's context window is not specified on this page, while o1-mini supports 128K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences between the models?
Key differences include licensing (MNPL-0.1 vs proprietary). See the full comparison above for benchmark-by-benchmark results.

Who develops each model?
Codestral-22B is developed by Mistral AI and o1-mini is developed by OpenAI.