Codestral-22B vs Phi-3.5-mini-instruct Comparison
Comparing Codestral-22B and Phi-3.5-mini-instruct across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
Codestral-22B leads on both reported benchmarks (HumanEval and MBPP), while Phi-3.5-mini-instruct leads on none.
Codestral-22B significantly outperforms across every benchmark reported here.
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
Cost data unavailable.
Model Size
Parameter count comparison
Codestral-22B has 18.4B more parameters than Phi-3.5-mini-instruct, making it 484.2% larger.
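The 18.4B gap and 484.2% figure can be reproduced from the commonly cited parameter counts, assuming 22.2B for Codestral-22B and 3.8B for Phi-3.5-mini-instruct (the exact counts are not listed on this page):

```python
# Sketch of the size comparison above.
# Assumed parameter counts (not stated on this page):
# Codestral-22B ~22.2B, Phi-3.5-mini-instruct ~3.8B.
codestral_params = 22.2e9
phi_params = 3.8e9

diff = codestral_params - phi_params      # absolute parameter gap
pct_larger = diff / phi_params * 100      # relative size, in percent

print(f"{diff / 1e9:.1f}B more parameters, {pct_larger:.1f}% larger")
# → 18.4B more parameters, 484.2% larger
```

"484.2% larger" here means Codestral-22B has roughly 5.8 times as many parameters as Phi-3.5-mini-instruct.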
Context Window
Maximum input and output token capacity
Only Phi-3.5-mini-instruct specifies its context window: 128,000 input tokens and 128,000 output tokens. Codestral-22B's limits are not listed here.
License
Usage and distribution terms
Codestral-22B is licensed under MNPL-0.1, while Phi-3.5-mini-instruct uses MIT.
License differences may affect how you can use these models in commercial or open-source projects.
MNPL-0.1
Open weights
MIT
Open weights
Release Timeline
When each model was launched
Codestral-22B was released on 2024-05-29, while Phi-3.5-mini-instruct was released on 2024-08-23.
Phi-3.5-mini-instruct is 3 months newer than Codestral-22B.
May 29, 2024
Aug 23, 2024
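The gap between the two release dates stated above is fixed (unlike "years ago" figures, which depend on when the page is viewed) and can be checked directly:

```python
from datetime import date

# Release dates as stated on this page.
codestral_release = date(2024, 5, 29)
phi_release = date(2024, 8, 23)

gap = phi_release - codestral_release
# 30.44 days is the average Gregorian month length, used only for a rough conversion.
print(f"Phi-3.5-mini-instruct is {gap.days} days (~{gap.days / 30.44:.1f} months) newer")
# → Phi-3.5-mini-instruct is 86 days (~2.8 months) newer
```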
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Outputs Comparison
Key Takeaways
Codestral-22B
Mistral AI
Phi-3.5-mini-instruct
Microsoft
Detailed Comparison
| Feature | Codestral-22B | Phi-3.5-mini-instruct |
|---|---|---|
| Developer | Mistral AI | Microsoft |
| License | MNPL-0.1 (open weights) | MIT (open weights) |
| Release date | May 29, 2024 | Aug 23, 2024 |
| Context window | Not specified | 128,000 input / 128,000 output tokens |