Model Comparison

Gemma 3n E2B Instructed vs GLM-4.5

GLM-4.5 significantly outperforms Gemma 3n E2B Instructed on every shared benchmark.

Performance Benchmarks

Comparative analysis across standard metrics

Across the 3 shared benchmarks, Gemma 3n E2B Instructed leads on none, while GLM-4.5 leads on all 3 (GPQA, LiveCodeBench, MMLU-Pro).

Data as of Apr 22, 2026 (llm-stats.com)

Arena Performance

Human preference votes

No arena vote data is available for this comparison.

Pricing Analysis

Price comparison per million tokens

No verified cost data is available for Gemma 3n E2B Instructed; it is listed at $0.00 with no confirmed provider. GLM-4.5 pricing below is the lowest available price across all providers.
Google
Gemma 3n E2B Instructed
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown

Zhipu AI
GLM-4.5
Input tokens: $0.40
Output tokens: $1.60
Best provider: Deepinfra
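To make the per-million-token rates concrete, here is a minimal Python sketch (the token counts in the example are hypothetical; the prices are the GLM-4.5 rates listed above) that estimates the cost of a single request:

```python
# Minimal sketch: estimate request cost from per-1M-token prices.
# GLM-4.5 rates are the lowest listed above; Gemma's pricing is unverified.

GLM45_INPUT_USD_PER_M = 0.40   # USD per 1M input tokens
GLM45_OUTPUT_USD_PER_M = 1.60  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Return the USD cost of one request given per-1M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Hypothetical example: a 10,000-token prompt with a 2,000-token completion.
cost = request_cost(10_000, 2_000, GLM45_INPUT_USD_PER_M, GLM45_OUTPUT_USD_PER_M)
print(f"${cost:.4f}")  # -> $0.0072
```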

Model Size

Parameter count comparison

347.0B diff

GLM-4.5 has 347.0B more parameters than Gemma 3n E2B Instructed, making it 4337.5% larger (roughly 44 times the size).

Google
Gemma 3n E2B Instructed
8.0B parameters

Zhipu AI
GLM-4.5
355.0B parameters
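For reference, the 347.0B and 4337.5% figures follow directly from the two parameter counts above; a quick sketch of the arithmetic:

```python
# How the size difference above is derived from the parameter counts.
gemma_b = 8.0    # Gemma 3n E2B Instructed, billions of parameters
glm_b = 355.0    # GLM-4.5, billions of parameters

diff = glm_b - gemma_b                 # 347.0B absolute difference
pct_larger = diff / gemma_b * 100      # 4337.5% more parameters
ratio = glm_b / gemma_b                # ~44.4x the total size

print(f"{diff:.1f}B diff, {pct_larger:.1f}% larger, {ratio:.1f}x")
```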

Context Window

Maximum input and output token capacity

Only GLM-4.5 documents its context window: 131,072 tokens for both input and output. Gemma 3n E2B Instructed's limits are not specified.

Google
Gemma 3n E2B Instructed
Input: not specified
Output: not specified
Zhipu AI
GLM-4.5
Input: 131,072 tokens
Output: 131,072 tokens
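Since only GLM-4.5 documents its window, a practical question is whether a given document fits in 131,072 tokens. The sketch below uses a rough 4-characters-per-token heuristic (an assumption; actual counts depend on the tokenizer and language) to make that estimate:

```python
# Rough fit check against GLM-4.5's documented 131,072-token window.
# Assumes ~4 characters per token, a common English-text heuristic;
# real token counts depend on the model's tokenizer.

CONTEXT_WINDOW = 131_072
CHARS_PER_TOKEN = 4  # heuristic, not exact

def estimated_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, output_reserve: int = 4_096) -> bool:
    """Leave headroom for the reply in case input and output share the window."""
    return estimated_tokens(text) + output_reserve <= CONTEXT_WINDOW

doc = "x" * 300_000  # ~300K characters -> roughly 75K tokens
print(fits_in_context(doc))  # True: ~75K + 4K reserve < 131K
```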

Input Capabilities

Supported data types and modalities

Gemma 3n E2B Instructed supports multimodal inputs, whereas GLM-4.5 does not.

Gemma 3n E2B Instructed can handle text as well as images, audio, and video, making it suitable for multimodal applications.

Gemma 3n E2B Instructed

Text: supported
Images: supported
Audio: supported
Video: supported

GLM-4.5

Text: supported
Images: not supported
Audio: not supported
Video: not supported

License

Usage and distribution terms

Gemma 3n E2B Instructed is licensed under a proprietary license, while GLM-4.5 uses MIT.

License differences may affect how you can use these models in commercial or open-source projects.

Gemma 3n E2B Instructed

Proprietary

Weights downloadable under Google's custom Gemma terms

GLM-4.5

MIT

Open weights

Release Timeline

When each model was launched

Gemma 3n E2B Instructed was released on 2025-06-26, while GLM-4.5 was released on 2025-07-28.

GLM-4.5 is 1 month newer than Gemma 3n E2B Instructed.

Gemma 3n E2B Instructed

Jun 26, 2025 (about 10 months ago)

GLM-4.5

Jul 28, 2025 (about 9 months ago; 1 month newer)

Knowledge Cutoff

When training data ends

Gemma 3n E2B Instructed has a documented knowledge cutoff of 2024-06-01, while GLM-4.5's cutoff date is not specified.

We can confirm Gemma 3n E2B Instructed's training data extends to 2024-06-01, but cannot make a direct comparison without GLM-4.5's cutoff date.

Gemma 3n E2B Instructed

Jun 2024

GLM-4.5

Not specified

Outputs Comparison

No output comparison data is available for this pair.


Key Takeaways

Gemma 3n E2B Instructed supports multimodal inputs (text, images, audio, video)
GLM-4.5 has a larger documented context window (131,072 tokens)
GLM-4.5 has open weights under the MIT license
GLM-4.5 has a higher GPQA score (79.1% vs 24.8%)
GLM-4.5 has a higher LiveCodeBench score (72.9% vs 13.2%)
GLM-4.5 has a higher MMLU-Pro score (84.6% vs 40.5%)

Detailed Comparison

AI Model Comparison Table

| Feature | Gemma 3n E2B Instructed (Google) | GLM-4.5 (Zhipu AI) |
|---|---|---|
| Parameters | 8.0B | 355.0B |
| Context window (input/output) | Not specified | 131,072 / 131,072 tokens |
| Input modalities | Text, images, audio, video | Text |
| License | Proprietary | MIT (open weights) |
| Release date | Jun 26, 2025 | Jul 28, 2025 |
| Knowledge cutoff | Jun 2024 | Not specified |
| GPQA | 24.8% | 79.1% |
| LiveCodeBench | 13.2% | 72.9% |
| MMLU-Pro | 40.5% | 84.6% |
| Input price (per 1M tokens) | $0.00 (unverified) | $0.40 |
| Output price (per 1M tokens) | $0.00 (unverified) | $1.60 |

FAQ

Common questions about Gemma 3n E2B Instructed vs GLM-4.5

Which model performs better overall?
GLM-4.5 significantly outperforms Gemma 3n E2B Instructed on every shared benchmark. Gemma 3n E2B Instructed is made by Google and GLM-4.5 is made by Zhipu AI. The best choice depends on your use case; compare their benchmark scores, pricing, and capabilities above.

How do their benchmark scores compare?
Gemma 3n E2B Instructed's strongest scores are HumanEval: 66.5%, MMLU: 60.1%, Global-MMLU-Lite: 59.0%, MBPP: 56.6%, and Global-MMLU: 55.1%. GLM-4.5's strongest scores are MATH-500: 98.2%, AIME 2024: 91.0%, MMLU-Pro: 84.6%, TAU-bench Retail: 79.7%, and GPQA: 79.1%.

What context window does each model support?
Gemma 3n E2B Instructed's context window is not documented here, while GLM-4.5 supports 131K tokens. A larger context window lets you process longer documents, conversations, or codebases in a single request.

What are the key differences?
Key differences include multimodal support (Gemma 3n E2B Instructed: yes; GLM-4.5: no) and licensing (Proprietary vs MIT). See the full comparison above for benchmark-by-benchmark results.

Who develops these models?
Gemma 3n E2B Instructed is developed by Google and GLM-4.5 is developed by Zhipu AI.