Model Comparison
Qwen3.5-397B-A17B vs Phi-3.5-MoE-instruct
Qwen3.5-397B-A17B significantly outperforms across most benchmarks.
Performance Benchmarks
Comparative analysis across standard metrics
Qwen3.5-397B-A17B leads in 3 benchmarks (GPQA, MMLU-Pro, MMMLU), while Phi-3.5-MoE-instruct leads in none.
Arena Performance
Human preference votes
Pricing Analysis
Price comparison per million tokens
Cost data unavailable.
Model Size
Parameter count comparison
Qwen3.5-397B-A17B has 337.0B more parameters than Phi-3.5-MoE-instruct, making it 561.7% larger.
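As a rough check of the size gap, the sketch below recomputes the difference and ratio. The 397B and 60B totals are assumptions read back from the figures above, not officially confirmed parameter counts.

```python
# Rough parameter-count comparison. Totals are assumptions inferred from the
# figures above: ~397B for Qwen3.5-397B-A17B, ~60B for Phi-3.5-MoE-instruct.
qwen_total_b = 397.0
phi_total_b = 60.0

delta_b = qwen_total_b - phi_total_b            # absolute difference, in billions
percent_larger = (delta_b / phi_total_b) * 100  # how much larger Qwen is, relative to Phi

print(f"Difference: {delta_b:.1f}B parameters")        # -> Difference: 337.0B parameters
print(f"Relative size: {percent_larger:.1f}% larger")   # -> Relative size: 561.7% larger
```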
Context Window
Maximum input and output token capacity
Only Qwen3.5-397B-A17B specifies its context limits: 262,144 input tokens and 64,000 output tokens. No context figures are listed for Phi-3.5-MoE-instruct.
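If you plan requests around these limits, a simple budget check helps. The sketch below uses the 262,144-input / 64,000-output figures listed for Qwen3.5-397B-A17B; the prompt and completion token counts are hypothetical placeholders.

```python
# Minimal context-budget check against Qwen3.5-397B-A17B's listed limits.
# prompt_tokens and desired_output_tokens are hypothetical placeholder values.
MAX_INPUT_TOKENS = 262_144
MAX_OUTPUT_TOKENS = 64_000

prompt_tokens = 180_000          # e.g. a large document plus instructions (placeholder)
desired_output_tokens = 8_000    # requested completion length (placeholder)

if prompt_tokens > MAX_INPUT_TOKENS:
    print("Prompt exceeds the input window; truncate or chunk the document.")
elif desired_output_tokens > MAX_OUTPUT_TOKENS:
    print("Requested completion exceeds the output limit; lower the requested length.")
else:
    print("Request fits within the listed context limits.")
```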
Input Capabilities
Supported data types and modalities
Qwen3.5-397B-A17B supports multimodal inputs, whereas Phi-3.5-MoE-instruct does not.
Qwen3.5-397B-A17B can handle text alongside other modalities such as images, making it suitable for multimodal applications.
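To illustrate what a multimodal call might look like, here is a minimal sketch assuming an OpenAI-compatible chat endpoint. The base URL, API key environment variable, and model identifier are placeholders, not confirmed values for either provider.

```python
# Hypothetical multimodal request sketch, assuming an OpenAI-compatible endpoint.
# The base URL, API key env var, and model name are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://example.com/v1"),  # placeholder endpoint
    api_key=os.environ.get("LLM_API_KEY", "sk-placeholder"),
)

response = client.chat.completions.create(
    model="qwen3.5-397b-a17b",  # placeholder model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```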
License
Usage and distribution terms
Qwen3.5-397B-A17B is licensed under Apache 2.0, while Phi-3.5-MoE-instruct uses MIT.
License differences may affect how you can use these models in commercial or open-source projects.
Qwen3.5-397B-A17B: Apache 2.0 (open weights)
Phi-3.5-MoE-instruct: MIT (open weights)
Release Timeline
When each model was launched
Qwen3.5-397B-A17B was released on 2026-02-16, while Phi-3.5-MoE-instruct was released on 2024-08-23.
Qwen3.5-397B-A17B is 18 months newer than Phi-3.5-MoE-instruct.
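The roughly 18-month gap can be verified with a quick date calculation; this is plain arithmetic on the release dates quoted above.

```python
# Verify the release-date gap using the dates quoted above.
from datetime import date

qwen_release = date(2026, 2, 16)
phi_release = date(2024, 8, 23)

gap_days = (qwen_release - phi_release).days
gap_months = gap_days / 30.44   # average month length

print(f"Gap: {gap_days} days (~{gap_months:.1f} months)")  # ~17.8 months, i.e. about 18
```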
Knowledge Cutoff
When training data ends
Neither model specifies a knowledge cutoff date, so the recency of their training data cannot be compared.
Key Takeaways
Qwen3.5-397B-A17B: developed by Alibaba Cloud / Qwen Team
Phi-3.5-MoE-instruct: developed by Microsoft
Detailed Comparison
FAQ
Common questions about Qwen3.5-397B-A17B vs Phi-3.5-MoE-instruct