Model Comparison

GPT-4 vs Kimi-k1.5

Kimi-k1.5 scores higher on the one benchmark the two models share (MMLU).

Performance Benchmarks

Comparative analysis across standard metrics

1 shared benchmark

GPT-4 outperforms on 0 shared benchmarks, while Kimi-k1.5 is better on 1 (MMLU).


Fri Apr 03 2026 • llm-stats.com

Arena Performance

Human preference votes

Pricing Analysis

Price comparison per million tokens

Pricing is listed for GPT-4 only; Kimi-k1.5's listed $0.00 rates with an unknown provider likely reflect missing data.

Lowest available price from all providers
OpenAI
GPT-4
Input tokens: $30.00
Output tokens: $60.00
Best provider: Azure

Moonshot AI
Kimi-k1.5
Input tokens: $0.00
Output tokens: $0.00
Best provider: Unknown Organization
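As a sanity check on the rates above, per-request cost follows directly from per-million-token pricing. A minimal sketch, using GPT-4's listed rates ($30.00 input / $60.00 output per million tokens) and hypothetical token counts:

```python
# Estimate request cost from per-million-token rates.
# Rates come from the pricing table above; token counts are hypothetical.

def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Return USD cost given token counts and per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# GPT-4 via its listed best provider: $30.00 in / $60.00 out per 1M tokens.
cost = request_cost(input_tokens=1_000, output_tokens=500,
                    input_rate=30.00, output_rate=60.00)
print(f"${cost:.4f}")  # → $0.0600
```

A 1,000-token prompt with a 500-token reply thus costs six cents at these rates; output tokens dominate because they are billed at twice the input rate.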

Context Window

Maximum input and output token capacity

Only GPT-4 specifies its context limits: 32,768 tokens for input and 32,768 tokens for output. Kimi-k1.5's limits are not listed.

OpenAI
GPT-4
Input: 32,768 tokens
Output: 32,768 tokens

Moonshot AI
Kimi-k1.5
Input: —
Output: —
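To gauge whether a prompt fits GPT-4's 32,768-token window, a rough heuristic of about four characters per token can be used (an assumption for English text; real tokenizers vary by language and content). A minimal sketch:

```python
# Rough check that a prompt plus reserved output fits a context window.
# The 4-chars-per-token ratio is a crude heuristic, not a real tokenizer.

CONTEXT_WINDOW = 32_768  # GPT-4's documented limit, per the table above

def estimated_tokens(text: str) -> int:
    """Crudely estimate token count as ceil(len(text) / 4)."""
    return -(-len(text) // 4)  # ceiling division

def fits(text: str, reserved_output: int = 1_000) -> bool:
    """True if estimated prompt tokens plus reserved output fit the window."""
    return estimated_tokens(text) + reserved_output <= CONTEXT_WINDOW

print(fits("hello " * 500))   # short prompt: True
print(fits("x" * 200_000))    # ~50K estimated tokens: False
```

For real deployments, an exact tokenizer for the target model should replace the heuristic, since a character-count estimate can be off by a factor of two or more on code or non-English text.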

Input Capabilities

Supported data types and modalities

Both GPT-4 and Kimi-k1.5 support multimodal inputs.

Both can process several types of data, which makes them versatile across applications.

GPT-4

Text
Images
Audio
Video

Kimi-k1.5

Text
Images
Audio
Video

License

Usage and distribution terms

Both models are licensed under proprietary licenses.

Both models have usage restrictions defined by their respective organizations.

GPT-4

Proprietary

Closed source

Kimi-k1.5

Proprietary

Closed source

Release Timeline

When each model was launched

GPT-4 was released on 2023-06-13, while Kimi-k1.5 was released on 2025-01-20.

Kimi-k1.5 is about 19 months (1.6 years) newer than GPT-4.

GPT-4

Jun 13, 2023

2.8 years ago

Kimi-k1.5

Jan 20, 2025

1.2 years ago

1.6yr newer
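The release gap quoted above can be verified with standard date arithmetic; a minimal sketch using the release dates listed in this section:

```python
from datetime import date

# Release dates from the timeline above.
gpt4_release = date(2023, 6, 13)
kimi_release = date(2025, 1, 20)

gap_days = (kimi_release - gpt4_release).days
print(gap_days)                     # 587 days
print(round(gap_days / 30.44, 1))   # ~19.3 months (30.44 days/avg month)
print(round(gap_days / 365.25, 1))  # ~1.6 years
```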

Knowledge Cutoff

When training data ends

GPT-4 has a documented knowledge cutoff of 2022-12-31, while Kimi-k1.5's cutoff date is not specified.

We can confirm GPT-4's training data extends to 2022-12-31, but cannot make a direct comparison without Kimi-k1.5's cutoff date.

GPT-4

Dec 2022

Kimi-k1.5

Not specified



Key Takeaways

GPT-4 has the larger documented context window (32,768 tokens)
Kimi-k1.5 has the higher MMLU score (87.4% vs 86.4%)


FAQ

Common questions about GPT-4 vs Kimi-k1.5

Kimi-k1.5 scores higher on the one benchmark the two models share (MMLU: 87.4% vs 86.4%). GPT-4 is made by OpenAI and Kimi-k1.5 is made by Moonshot AI. The best choice depends on your use case: compare their benchmark scores, pricing, and capabilities above.
GPT-4 scores AI2 Reasoning Challenge (ARC): 96.3%, HellaSwag: 95.3%, Uniform Bar Exam: 90.0%, SAT Math: 89.0%, LSAT: 88.0%. Kimi-k1.5 scores MATH-500: 96.2%, CLUEWSC: 91.4%, C-Eval: 88.3%, MMLU: 87.4%, IFEval: 87.2%.
GPT-4 supports a 32,768-token context window, while Kimi-k1.5's limit is not listed. A larger context window lets you process longer documents, conversations, or codebases in a single request.
GPT-4 is developed by OpenAI and Kimi-k1.5 is developed by Moonshot AI.