
MiniMax M1 80K

Overview

MiniMax-M1 is an open-source, large-scale reasoning model that uses a hybrid-attention architecture for efficient long-context processing. It supports up to a 1 million token context window and 80,000-token reasoning output, matching Gemini 2.5 Pro’s scale while being highly cost-effective. Its Lightning Attention mechanism reduces compute requirements to about 30% of DeepSeek R1’s, and a new reinforcement learning algorithm, CISPO, doubles convergence speed compared to other RL methods. Trained on 512 H800s over three weeks, M1 achieves near state-of-the-art results across software engineering, long-context, and tool-use benchmarks, outperforming most open models and rivaling top closed systems.

MiniMax M1 80K was released on June 16, 2025. API access is available through Novita.

Performance

Timeline

Released: June 16, 2025
Knowledge Cutoff: Unknown

Specifications

Parameters: 456.0B
License: MIT
Training Data: Unknown

Benchmarks

MiniMax M1 80K Performance Across Datasets

Scores sourced from the model's scorecard, paper, or official blog posts

llm-stats.com, Fri Jan 02 2026

Pricing

Pricing, performance, and capabilities for MiniMax M1 80K across different providers:

Novita
Input: $0.55 / M tokens
Output: $2.20 / M tokens
Max Input: 1.0M tokens
Max Output: 40.0K tokens
Latency (s): N/A
Throughput: N/A
Quantization: bf16
Input Modalities: Text
Output Modalities: Text
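As a quick sanity check on these rates, the cost of a single request can be estimated from token counts. The helper below is a minimal sketch using the Novita prices listed above ($0.55 per million input tokens, $2.20 per million output tokens); actual billing may differ.

```python
# Estimate the cost of one MiniMax M1 80K request at Novita's listed rates.
INPUT_PRICE_PER_M = 0.55   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M +
            output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 100K-token prompt with a full 40K-token completion.
print(f"${request_cost(100_000, 40_000):.4f}")  # → $0.1430
```

Output tokens dominate the bill at a 4:1 price ratio, which matters for a reasoning model that can emit up to 40K tokens per response.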

API Access

API Access Coming Soon

API access for MiniMax M1 80K through this site's gateway is coming soon. In the meantime, the model is available via Novita's API.
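Since Novita exposes an OpenAI-compatible chat completions interface, a request can be sketched as below. The endpoint URL and the model id `minimax/minimax-m1-80k` are assumptions for illustration; check Novita's documentation for the exact values.

```python
import json
import urllib.request

# Assumed values: both the endpoint path and model id are illustrative
# and may differ from Novita's actual API.
API_URL = "https://api.novita.ai/v3/openai/chat/completions"
MODEL_ID = "minimax/minimax-m1-80k"

def build_request(prompt: str, api_key: str, max_tokens: int = 1024):
    """Build an OpenAI-style chat completion request for MiniMax M1 80K."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To send: response = urllib.request.urlopen(build_request("Hello", "YOUR_KEY"))
```

Building the request separately from sending it keeps the example runnable without credentials; swapping in an official OpenAI-compatible client SDK would work the same way.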

FAQ

Common questions about MiniMax M1 80K:

Q: When was MiniMax M1 80K released?
A: MiniMax M1 80K was released on June 16, 2025.

Q: How many parameters does MiniMax M1 80K have?
A: MiniMax M1 80K has 456.0 billion parameters.