Model Comparison

GPT OSS 120B vs MiMo-V2-Flash

MiMo-V2-Flash leads on both directly comparable benchmarks (GPQA and Humanity's Last Exam), and is about 1.2x cheaper per blended token at a 3:1 input:output mix.

Performance Benchmarks

Comparative analysis across standard metrics

2 benchmarks

Across the two directly comparable benchmarks, MiMo-V2-Flash leads in both: GPQA (83.7% vs 80.1%) and Humanity's Last Exam (22.1% vs 14.9%). GPT OSS 120B leads in neither.

Thu May 14 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

MiMo-V2-Flash costs less

For input processing, GPT OSS 120B ($0.09/1M tokens) is 1.1x cheaper than MiMo-V2-Flash ($0.10/1M tokens).

For output processing, GPT OSS 120B ($0.45/1M tokens) is 1.5x more expensive than MiMo-V2-Flash ($0.30/1M tokens).

In conclusion, on a blended basis GPT OSS 120B ($0.18/1M) is more expensive than MiMo-V2-Flash ($0.15/1M).*

* Using a 3:1 ratio of input to output tokens
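The footnote's blended-price arithmetic can be sketched in a few lines. This is a minimal illustration using the lowest-provider prices quoted above; the `blended_price` helper name is ours, and the 3:1 weighting follows the page's stated assumption.

```python
# Sketch: blended price per 1M tokens at the page's 3:1 input:output ratio.
# Prices are the lowest-provider rates quoted above.

def blended_price(input_price: float, output_price: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens."""
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

gpt_oss = blended_price(0.09, 0.45)   # $0.18 per 1M tokens
mimo = blended_price(0.10, 0.30)      # $0.15 per 1M tokens

print(f"GPT OSS 120B:  ${gpt_oss:.2f}/1M blended")
print(f"MiMo-V2-Flash: ${mimo:.2f}/1M blended")
print(f"MiMo-V2-Flash is {gpt_oss / mimo:.1f}x cheaper blended")
```

At these rates the blended figures come out to $0.18 vs $0.15 per million tokens, i.e. the 1.2x gap cited above.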

Lowest available price from all providers
OpenAI
GPT OSS 120B
Input tokens: $0.09
Output tokens: $0.45
Best provider: DeepInfra
Xiaomi
MiMo-V2-Flash
Input tokens: $0.10
Output tokens: $0.30
Best provider: Xiaomi

Model Size

Parameter count comparison

192.2B difference

MiMo-V2-Flash has 192.2B more parameters than GPT OSS 120B, making it 164.6% larger.

OpenAI
GPT OSS 120B
116.8B parameters
Xiaomi
MiMo-V2-Flash
309.0B parameters
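The size gap quoted above can be checked with a couple of lines; the variable names are ours and the figures (in billions) come from the model cards in this section.

```python
# Sketch: verifying the parameter-count gap quoted above (figures in billions).
gpt_oss_params = 116.8
mimo_params = 309.0

diff = mimo_params - gpt_oss_params        # absolute gap in billions
pct_larger = diff / gpt_oss_params * 100   # relative size increase

print(f"{diff:.1f}B more parameters ({pct_larger:.1f}% larger)")
```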

Context Window

Maximum input and output token capacity

MiMo-V2-Flash accepts 256,000 input tokens compared to GPT OSS 120B's 131,072 tokens. GPT OSS 120B can generate longer responses up to 131,072 tokens, while MiMo-V2-Flash is limited to 16,384 tokens.

OpenAI
GPT OSS 120B
Input: 131,072 tokens
Output: 131,072 tokens
Xiaomi
MiMo-V2-Flash
Input: 256,000 tokens
Output: 16,384 tokens

License

Usage and distribution terms

GPT OSS 120B is licensed under Apache 2.0, while MiMo-V2-Flash uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

GPT OSS 120B

Apache 2.0

Open weights

MiMo-V2-Flash

MIT

Open weights

Release Timeline

When each model was launched

GPT OSS 120B was released on 2025-08-05, while MiMo-V2-Flash was released on 2025-12-16.

MiMo-V2-Flash is 4 months newer than GPT OSS 120B.

GPT OSS 120B

Aug 5, 2025

9 months ago

MiMo-V2-Flash

Dec 16, 2025

4 months ago

4mo newer

Knowledge Cutoff

When training data ends

Neither model specifies a knowledge cutoff date.

Unable to compare the recency of their training data.

No cutoff dates available

Provider Availability

GPT OSS 120B is available from DeepInfra, Novita, OpenAI, Fireworks, and Groq. MiMo-V2-Flash is available from Xiaomi.

GPT OSS 120B

DeepInfra: $0.09/1M input, $0.45/1M output
Novita: $0.10/1M input, $0.50/1M output
OpenAI: $0.10/1M input, $0.50/1M output
Fireworks: $0.15/1M input, $0.60/1M output
Groq: $0.15/1M input, $0.60/1M output

MiMo-V2-Flash

Xiaomi: $0.10/1M input, $0.30/1M output
* Prices shown are per million tokens
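Ranking the GPT OSS 120B providers above by blended cost is mechanical. A minimal sketch, assuming the same 3:1 input:output mix as the pricing footnote; the provider prices are taken from the list above, and the `blended` helper is ours.

```python
# Sketch: ranking GPT OSS 120B providers by blended price at a
# 3:1 input:output token mix (prices per 1M tokens, from the list above).

providers = {
    "DeepInfra": (0.09, 0.45),
    "Novita":    (0.10, 0.50),
    "OpenAI":    (0.10, 0.50),
    "Fireworks": (0.15, 0.60),
    "Groq":      (0.15, 0.60),
}

def blended(inp: float, out: float,
            input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens."""
    return (inp * input_ratio + out * output_ratio) / (input_ratio + output_ratio)

cheapest = min(providers, key=lambda name: blended(*providers[name]))
print(f"Cheapest: {cheapest} at ${blended(*providers[cheapest]):.2f}/1M blended")
```

At these list prices DeepInfra comes out cheapest, matching the "Best provider" line in the pricing section.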


Key Takeaways

MiMo-V2-Flash: larger context window (256,000 input tokens), less expensive output tokens ($0.30 vs $0.45/1M), higher GPQA score (83.7% vs 80.1%), higher Humanity's Last Exam score (22.1% vs 14.9%)
GPT OSS 120B: less expensive input tokens ($0.09 vs $0.10/1M), longer maximum output (131,072 vs 16,384 tokens)

Detailed Comparison

GPT OSS 120B (OpenAI) vs MiMo-V2-Flash (Xiaomi)
Input price: $0.09/1M vs $0.10/1M
Output price: $0.45/1M vs $0.30/1M
Parameters: 116.8B vs 309.0B
Context window (input): 131,072 vs 256,000 tokens
Maximum output: 131,072 vs 16,384 tokens
License: Apache 2.0 vs MIT
Release date: Aug 5, 2025 vs Dec 16, 2025

FAQ

Common questions about GPT OSS 120B vs MiMo-V2-Flash.

Which is better, GPT OSS 120B or MiMo-V2-Flash?

MiMo-V2-Flash leads on both directly comparable benchmarks (GPQA and Humanity's Last Exam). GPT OSS 120B is made by OpenAI and MiMo-V2-Flash is made by Xiaomi. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.

How does GPT OSS 120B compare to MiMo-V2-Flash in benchmarks?

GPT OSS 120B scores MMLU: 90.0%, CodeForces: 82.1%, GPQA: 80.1%, TAU-bench Retail: 67.8%, HealthBench: 57.6%. MiMo-V2-Flash scores AIME 2025: 94.1%, Arena-Hard v2: 86.2%, MMLU-Pro: 84.9%, HMMT 2025: 84.4%, GPQA: 83.7%. On the benchmarks reported for both models, MiMo-V2-Flash leads: GPQA (83.7% vs 80.1%) and Humanity's Last Exam (22.1% vs 14.9%).

Is GPT OSS 120B cheaper than MiMo-V2-Flash?

GPT OSS 120B is 1.1x cheaper for input tokens, but MiMo-V2-Flash is cheaper overall at a 3:1 input:output mix. GPT OSS 120B costs $0.09/M input and $0.45/M output via DeepInfra. MiMo-V2-Flash costs $0.10/M input and $0.30/M output via Xiaomi.

What are the context window sizes for GPT OSS 120B and MiMo-V2-Flash?

GPT OSS 120B supports 131K tokens and MiMo-V2-Flash supports 256K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the main differences between GPT OSS 120B and MiMo-V2-Flash?

Key differences include context window (131K vs 256K), input pricing ($0.09 vs $0.10/M), licensing (Apache 2.0 vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who makes GPT OSS 120B and MiMo-V2-Flash?

GPT OSS 120B is developed by OpenAI and MiMo-V2-Flash is developed by Xiaomi.