LLM Updates Today
Real-time AI news, community insights, and new model releases. Your source for benchmarking and developments in artificial intelligence.
LLM News & Updates
Top community discussions and insights from this week
No posts available this week. Check back soon!
AI Updates Today: New LLM Models This Week
Recently released language models and their benchmark performance
No new models released this week. Check back soon!
New Open-Source LLM Models This Week
Recently released open-source language models
No new open-source models released this week. Check back soon!
Today's Highlights
Benchmarking and News About AI
Comprehensive AI model evaluation, real-world arena testing, and latest research insights
Popular Benchmarks
GPQA
A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. The questions are designed to be Google-proof and extremely difficult; PhD-level experts reach roughly 65% accuracy.
MMLU
Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.
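To make the multiple-choice setup concrete, here is a minimal sketch of how an MMLU-style item can be formatted into a prompt and scored by matching the predicted answer letter. The example question and function names are illustrative assumptions, not drawn from the actual dataset.

```python
# Minimal sketch of MMLU-style multiple-choice prompting and scoring.
# The sample item and field layout are illustrative, not the dataset's schema.
from string import ascii_uppercase


def format_prompt(question: str, choices: list[str]) -> str:
    """Render a question and its options as a single multiple-choice prompt."""
    lines = [question]
    for letter, choice in zip(ascii_uppercase, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)


def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of items where the predicted letter matches the answer key."""
    correct = sum(p.strip().upper().startswith(a) for p, a in zip(predictions, answers))
    return correct / len(answers)


if __name__ == "__main__":
    prompt = format_prompt(
        "Which planet is known as the Red Planet?",
        ["Venus", "Mars", "Jupiter", "Mercury"],
    )
    print(prompt)
    print(accuracy(["B"], ["B"]))  # 1.0
```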
MMLU-Pro
A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding the multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
MATH
A dataset of 12,500 challenging competition mathematics problems drawn from AMC 10, AMC 12, AIME, and other mathematics competitions. Each problem includes a full step-by-step solution and a difficulty level from 1 to 5, across seven subjects: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus.
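MATH solutions conventionally wrap the final answer in \boxed{...}, so a common (if simplistic) grading rule is to extract that span and compare it exactly. The sketch below assumes that convention; the regex does not handle nested braces and is illustrative only.

```python
# Sketch of answer grading for MATH-style problems: pull the last \boxed{...}
# from each solution and compare by exact string match. Simplified on purpose;
# nested braces and equivalent-but-differently-written answers are not handled.
import re


def extract_boxed(solution: str) -> str | None:
    """Return the contents of the last \\boxed{...} in a solution, if any."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1].strip() if matches else None


def is_correct(model_solution: str, reference_solution: str) -> bool:
    pred, gold = extract_boxed(model_solution), extract_boxed(reference_solution)
    return pred is not None and pred == gold


if __name__ == "__main__":
    ref = r"Adding the terms gives $2+3=5$, so the answer is $\boxed{5}$."
    out = r"The sum is $\boxed{5}$."
    print(is_correct(out, ref))  # True
```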
HumanEval
A benchmark that measures functional correctness for synthesizing programs from docstrings, consisting of 164 original programming problems that assess language comprehension, algorithms, and simple mathematics.
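HumanEval results are usually reported with the unbiased pass@k estimator from the original paper: given n sampled completions per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k). A small sketch of that formula:

```python
# Unbiased pass@k estimator used with HumanEval-style benchmarks:
# pass@k = 1 - C(n - c, k) / C(n, k), where n samples were drawn per problem
# and c of them pass the unit tests.
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples (drawn from n) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


if __name__ == "__main__":
    # Example: 200 samples per problem, 37 passing.
    print(round(pass_at_k(200, 37, 1), 3))   # 0.185
    print(round(pass_at_k(200, 37, 10), 3))
```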
AIME 2025
All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deductions and structured symbolic reasoning.
LiveCodeBench
LiveCodeBench is a holistic, contamination-free benchmark for evaluating large language models on code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff.
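The contamination-avoidance idea is simply to filter the problem pool by release date against the model's training cutoff. The sketch below illustrates that filter; the field names and data layout are assumptions for illustration, not LiveCodeBench's actual schema.

```python
# Sketch of date-based contamination filtering: only evaluate on problems
# released after the model's training cutoff. Field names are illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class Problem:
    title: str
    release_date: date


def unseen_problems(problems: list[Problem], training_cutoff: date) -> list[Problem]:
    """Keep only problems published after the model's training cutoff."""
    return [p for p in problems if p.release_date > training_cutoff]


if __name__ == "__main__":
    pool = [
        Problem("two-sum-variant", date(2023, 11, 2)),
        Problem("grid-paths-modulo", date(2024, 6, 15)),
    ]
    print([p.title for p in unseen_problems(pool, date(2024, 1, 1))])
    # ['grid-paths-modulo']
```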
IFEval
Instruction-Following Evaluation (IFEval) benchmark for large language models, focused on verifiable instructions: 25 instruction types across roughly 500 prompts, each containing one or more constraints that can be checked programmatically.
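"Verifiable" here means each constraint can be checked with a simple rule rather than a judge model. The checkers below are illustrative examples of that idea, not IFEval's actual instruction registry.

```python
# Sketch of programmatically verifiable instruction checks in the spirit of
# IFEval. These three checkers are illustrative, not the benchmark's own code.
import re


def check_min_words(response: str, minimum: int) -> bool:
    """Instruction of the form: 'Answer in at least N words.'"""
    return len(response.split()) >= minimum


def check_no_commas(response: str) -> bool:
    """Instruction of the form: 'Do not use any commas in your answer.'"""
    return "," not in response


def check_num_bullets(response: str, expected: int) -> bool:
    """Instruction of the form: 'Your answer must contain exactly N bullet points.'"""
    return len(re.findall(r"^\s*[-*] ", response, flags=re.MULTILINE)) == expected


if __name__ == "__main__":
    answer = "- first point\n- second point"
    print(check_num_bullets(answer, 2), check_no_commas(answer))  # True True
```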
LLM Arenas
Popular Comparisons
Latest Blogs
DeepSeek V3.2-Exp Release: Pricing, API Costs, Context Window & Benchmarks
A deep dive into DeepSeek-V3.2-Exp, the new sparse-attention model that slashes API costs while pushing long-context efficiency.

Claude Sonnet 4.5 vs GPT-5: Complete AI Model Comparison 2025
Compare Claude Sonnet 4.5 and GPT-5 across performance, safety, and applications. Discover which AI model best fits your needs in our detailed analysis.

GLM-4.6: Complete Guide, Pricing, Context Window, and API Access
A comprehensive look at GLM-4.6 - Zhipu AI's latest release with 128k context window, agentic capabilities, pricing, API details, benchmarks, and what it means for developers and enterprises.
