
GLM-4.7

Overview

GLM-4.7 is a coding-centric model that thinks before acting, preserves its reasoning across turns, and lets you control thinking per request to trade speed for accuracy. It upgrades agentic workflows with stronger multi-step tool use, better terminal and multilingual coding, and a noticeable jump in UI output quality for modern, clean webpages and slides. You can use it in popular coding agents, call it via the Z.ai API, or run it locally with the public weights on Hugging Face and ModelScope using vLLM or SGLang.
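For the per-request thinking control and API usage described above, here is a minimal sketch using an OpenAI-compatible client. The base URL, model identifier, and the thinking field passed via extra_body are assumptions for illustration; check the Z.ai or provider documentation for the exact names.

```python
# Minimal sketch: calling GLM-4.7 through an OpenAI-compatible endpoint
# and toggling thinking per request. The endpoint URL, model id, and the
# "thinking" payload are assumptions; consult the provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/api/paas/v4",  # assumed Z.ai endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="glm-4.7",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Refactor this shell loop into a Python script."}
    ],
    extra_body={"thinking": {"type": "enabled"}},  # assumed per-request thinking switch
)

print(response.choices[0].message.content)
```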

GLM-4.7 was released on December 22, 2025. API access is available through Novita.

Performance

Timeline

Released: December 22, 2025
Knowledge Cutoff: Unknown

Specifications

Parameters: 358.0B
License: MIT
Training Data: Unknown

Benchmarks

GLM-4.7 Performance Across Datasets

Scores sourced from the model's scorecard, paper, or official blog posts


Pricing

Pricing, performance, and capabilities for GLM-4.7 across different providers:

| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization |
|----------|-------------|--------------|-----------|------------|-------------|------------|--------------|
| Novita   | $0.60       | $2.20        | 204.8K    | 131.1K     | N/A         | N/A        | bf16         |
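To make those rates concrete, the sketch below estimates the cost of a single request at Novita's listed prices; the token counts are illustrative, not measured values.

```python
# Illustrative cost estimate at Novita's listed GLM-4.7 rates
# ($0.60 per 1M input tokens, $2.20 per 1M output tokens).
INPUT_PRICE_PER_M = 0.60
OUTPUT_PRICE_PER_M = 2.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 12K-token prompt that produces a 3K-token completion.
print(f"${request_cost(12_000, 3_000):.4f}")  # $0.0138
```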

API Access

API Access Coming Soon

API access for GLM-4.7 will be available soon through our gateway.


FAQ

Common questions about GLM-4.7

When was GLM-4.7 released?
GLM-4.7 was released on December 22, 2025 by ZAI.

Who created GLM-4.7?
GLM-4.7 was created by ZAI.

How many parameters does GLM-4.7 have?
GLM-4.7 has 358.0 billion parameters.

What license does GLM-4.7 use?
GLM-4.7 is released under the MIT license. This is an open-source/open-weight license.

Is GLM-4.7 multimodal?
Yes, GLM-4.7 is a multimodal model that can process both text and images as input.