Model Comparison
DeepSeek-R1 vs Codestral-22B
Comparing DeepSeek-R1 and Codestral-22B across benchmarks, pricing, and capabilities.
Performance Benchmarks
Comparative analysis across standard metrics
DeepSeek-R1 and Codestral-22B share no common benchmark results, so a direct performance comparison isn't possible; they were likely evaluated on different test suites.
Arena Performance
Human preference votes
No head-to-head arena data is available for this pair.
Model Size
Parameter count comparison
DeepSeek-R1 (671B total parameters; as a mixture-of-experts model, only a fraction is active per token) has 648.8B more parameters than Codestral-22B (22.2B), making it roughly 30× (2,922.5%) larger.
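As a sanity check, the gap follows directly from the two published totals; a quick sketch of the arithmetic:

```python
# Back-of-the-envelope check of the size gap.
deepseek_r1_params = 671.0   # billions (total parameters, MoE)
codestral_params = 22.2      # billions (dense)

diff = deepseek_r1_params - codestral_params   # 648.8B more
pct_larger = diff / codestral_params * 100     # ~2922.5% larger
ratio = deepseek_r1_params / codestral_params  # ~30.2x the size

print(f"{diff:.1f}B more parameters ({pct_larger:.1f}% / {ratio:.1f}x larger)")
```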
Context Window
Maximum input and output token capacity
DeepSeek-R1 specifies a 131,072-token context window for both input and output; no context figures are listed for Codestral-22B.
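To illustrate what a 131,072-token window means in practice, here is a minimal sketch of a capped request to DeepSeek-R1 through an OpenAI-compatible client; the endpoint URL, model identifier, environment variable name, and the crude token estimate are assumptions for illustration, not part of the comparison data.

```python
# Minimal sketch: budgeting output tokens against DeepSeek-R1's
# 131,072-token context window via an OpenAI-compatible client.
import os
from openai import OpenAI

CONTEXT_WINDOW = 131_072  # tokens, per the published spec

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed env var
    base_url="https://api.deepseek.com",     # assumed endpoint
)

prompt = "Summarize the trade-offs between a 671B MoE model and a 22B dense model."
# Leave headroom: output budget = window minus a rough estimate of prompt tokens.
prompt_token_estimate = len(prompt) // 4     # crude ~4 chars/token heuristic
max_output = CONTEXT_WINDOW - prompt_token_estimate

response = client.chat.completions.create(
    model="deepseek-reasoner",               # assumed model identifier
    messages=[{"role": "user", "content": prompt}],
    max_tokens=min(max_output, 8_192),       # cap well below the window
)
print(response.choices[0].message.content)
```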
License
Usage and distribution terms
DeepSeek-R1 is licensed under MIT, while Codestral-22B uses MNPL-0.1 (the Mistral AI Non-Production License).
This difference matters in practice: MIT is permissive and allows commercial use, while MNPL-0.1 restricts the model to non-production (research and testing) use.
- DeepSeek-R1: MIT (open weights)
- Codestral-22B: MNPL-0.1 (open weights)
Release Timeline
When each model was launched
DeepSeek-R1 was released on 2025-01-20, while Codestral-22B was released on 2024-05-29.
DeepSeek-R1 is roughly 8 months (236 days) newer than Codestral-22B.
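The gap is easy to verify from the two release dates; a quick sketch:

```python
# Verify the release gap between the two models.
from datetime import date

deepseek_r1_release = date(2025, 1, 20)
codestral_release = date(2024, 5, 29)

gap_days = (deepseek_r1_release - codestral_release).days
print(f"{gap_days} days (~{gap_days / 30.44:.1f} months)")  # 236 days, ~7.8 months
```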
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Outputs Comparison
Key Takeaways
DeepSeek-R1 (DeepSeek)
Codestral-22B (Mistral AI)
No standout differentiators emerge from the output data we have for this pair.
Detailed Comparison
| Feature | DeepSeek-R1 | Codestral-22B |
|---|---|---|
| Developer | DeepSeek | Mistral AI |
| Parameters | 671B (MoE) | 22.2B |
| Context window | 131,072 tokens (input and output) | Not specified |
| License | MIT (open weights) | MNPL-0.1 (open weights) |
| Release date | Jan 20, 2025 | May 29, 2024 |
| Knowledge cutoff | Not specified | Not specified |
FAQ
Common questions about DeepSeek-R1 vs Codestral-22B.