
MiniMax M1 40K

Overview

MiniMax-M1 is an open-source, large-scale reasoning model that uses a hybrid-attention architecture for efficient long-context processing. It supports up to a 1 million token context window and 80,000-token reasoning output, matching Gemini 2.5 Pro’s scale while being highly cost-effective. Its Lightning Attention mechanism reduces compute requirements to about 30% of DeepSeek R1’s, and a new reinforcement learning algorithm, CISPO, doubles convergence speed compared to other RL methods. Trained on 512 H800s over three weeks, M1 achieves near state-of-the-art results across software engineering, long-context, and tool-use benchmarks, outperforming most open models and rivaling top closed systems.
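Because the weights are released openly under the MIT license, a minimal local-inference sketch with Hugging Face transformers is shown below. The repository id, chat-template call, and generation settings are assumptions for illustration; check the official model card for the exact id and the recommended serving stack (large deployments of a 456B-parameter model typically use a dedicated inference engine such as vLLM).

```python
# Minimal sketch of running MiniMax-M1 locally with Hugging Face transformers.
# The repository id below and the chat-template usage are assumptions, not
# values confirmed by this page; consult the official model card before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "MiniMaxAI/MiniMax-M1-40k"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,   # hybrid-attention blocks ship as custom model code
    device_map="auto",        # shard the 456B parameters across available GPUs
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Summarize the CISPO RL algorithm in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation budget chosen arbitrarily for the sketch; the 40K variant refers
# to the model's reasoning-output budget, not a required setting here.
outputs = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```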

MiniMax M1 40K was released on June 16, 2025.

Performance

Timeline

Released: June 16, 2025
Knowledge Cutoff: Unknown

Specifications

Parameters: 456.0B
License: MIT
Training Data: Unknown

Benchmarks

MiniMax M1 40K Performance Across Datasets

Scores sourced from the model's scorecard, paper, or official blog posts


Pricing

Pricing, performance, and capabilities for MiniMax M1 40K across different providers:

No pricing information available for this model.

API Access

API Access Coming Soon

API access for MiniMax M1 40K will be available soon through our gateway.
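Until the gateway is live, the sketch below only illustrates what a request might look like through an OpenAI-compatible endpoint; the base URL and model identifier are placeholders, not published values.

```python
# Hypothetical sketch of calling MiniMax M1 40K through an OpenAI-compatible
# gateway once API access opens. The base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # placeholder, not a real endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="minimax-m1-40k",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain Lightning Attention briefly."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```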

FAQ

Common questions about MiniMax M1 40K

When was MiniMax M1 40K released?
MiniMax M1 40K was released on June 16, 2025.

How many parameters does MiniMax M1 40K have?
MiniMax M1 40K has 456.0 billion parameters.