Phi-3.5-MoE-instruct
Microsoft

Overview
Phi-3.5-MoE-instruct is a mixture-of-experts model with ~42B total parameters (6.6B active per token) and a 128K context window. It excels at reasoning, math, coding, and multilingual tasks, outperforming larger dense models on many benchmarks. It underwent a thorough safety post-training process (SFT + DPO) and is licensed under MIT. This model is well suited to scenarios that require both efficiency and high performance, particularly multilingual or reasoning-intensive tasks.
Phi-3.5-MoE-instruct was released on August 23, 2024.
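The "6.6B active parameters" figure comes from the mixture-of-experts design: a router sends each token through only a small subset of the experts, so most of the ~42B total parameters sit idle on any given forward pass. A minimal, illustrative sketch of top-2 gating follows; the expert functions, logits, and routing details here are hypothetical stand-ins, not Phi-3.5-MoE's actual implementation.

```python
import math

def top2_gate(logits):
    """Select the two highest-scoring experts and softmax their logits.

    Returns (expert_index, weight) pairs. Illustrative only -- not the
    actual Phi-3.5-MoE router.
    """
    top2 = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:2]
    exps = [math.exp(logits[i]) for i in top2]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top2, exps)]

def moe_forward(x, experts, router_logits):
    """Weighted sum of the two selected experts' outputs for one token."""
    return sum(w * experts[i](x) for i, w in top2_gate(router_logits))

# 16 toy experts, but only 2 run per token -- the reason only a fraction
# of the model's total parameters are active at inference time.
experts = [lambda x, k=k: x * (k + 1) for k in range(16)]
logits = [0.0] * 16
logits[3], logits[7] = 2.0, 1.0   # router prefers experts 3 and 7
y = moe_forward(1.0, experts, logits)
```

The key efficiency property: per-token compute scales with the two selected experts rather than with all sixteen, which is how a ~42B-parameter model can run with the cost profile of a much smaller dense model.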
Performance
Timeline
Other Details
Related Models
Compare Phi-3.5-MoE-instruct to other models by quality (GPQA score) vs cost. Higher scores and lower costs represent better value.
Benchmarks
Phi-3.5-MoE-instruct Performance Across Datasets
Scores sourced from the model's scorecard, paper, or official blog posts
Pricing
Pricing, performance, and capabilities for Phi-3.5-MoE-instruct across different providers:
Example Outputs
Recent Posts
Recent Reviews
API Access
API access for Phi-3.5-MoE-instruct will be available soon through our gateway.
FAQ
Common questions about Phi-3.5-MoE-instruct
