
AI News Today

Latest AI news, community discussions, and new model releases. Stay informed with real-time updates from the AI community.

LLM News & Updates


Top community discussions and insights from this week

General
@manumaniac

Making this day a great day!

Most people wait for motivation to act. The 10x operator knows: Action creates Motivation. Stop reading. Go do the hardest thing on your list for 20 minutes.

AI Updates Today: New LLM Models This Week


Recently released language models and their benchmark performance

No new models released this week. Check back soon!

New Open-Source LLM Models This Week


Recently released open-source language models

No new open-source models released this week. Check back soon!

Today's Highlights

Benchmarking and News About AI

Comprehensive AI model evaluation, real-world arena testing, and the latest research insights

Popular Benchmarks


GPQA

general · reasoning

A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. The questions are "Google-proof" and extremely difficult: even PhD experts reach only about 65% accuracy. A prompt-formatting sketch follows this entry.

139 models
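
To make the format concrete, here is a minimal sketch of how a GPQA-style four-option question might be presented to a model, with the answer order shuffled so the gold letter varies per item. The record fields and question are illustrative assumptions, not the dataset's actual schema.

import random

# Illustrative GPQA-style record; field names are assumptions,
# not the dataset's real schema.
record = {
    "question": "Which quantum number determines an orbital's shape?",
    "correct": "Azimuthal (l)",
    "distractors": ["Principal (n)", "Magnetic (m_l)", "Spin (m_s)"],
}

def format_mc_prompt(record, rng):
    # Shuffle the four options so the gold letter varies per item.
    options = [record["correct"]] + record["distractors"]
    rng.shuffle(options)
    gold = "ABCD"[options.index(record["correct"])]
    lines = [record["question"]] + [
        f"({letter}) {text}" for letter, text in zip("ABCD", options)
    ]
    return "\n".join(lines), gold

prompt, gold_letter = format_mc_prompt(record, random.Random(0))
print(prompt)
print("Gold:", gold_letter)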

MMLU

general · language

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains. A scoring sketch follows this entry.

91 models
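
A common way to aggregate MMLU results is to score each subject separately and macro-average across the 57 subjects, so small subjects weigh as much as large ones. A minimal sketch with made-up per-item results:

from collections import defaultdict

# Made-up per-item results as (subject, is_correct) pairs.
results = [
    ("abstract_algebra", True), ("abstract_algebra", False),
    ("world_religions", True), ("world_religions", True),
]

by_subject = defaultdict(list)
for subject, correct in results:
    by_subject[subject].append(correct)

# Accuracy per subject, then an unweighted mean across subjects.
per_subject = {s: sum(v) / len(v) for s, v in by_subject.items()}
macro = sum(per_subject.values()) / len(per_subject)
print(per_subject)                    # {'abstract_algebra': 0.5, 'world_religions': 1.0}
print(f"macro-average: {macro:.2f}")  # 0.75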

MMLU-Pro

general · language

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding the multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and produces a 16-33% accuracy drop relative to the original MMLU; a short baseline calculation follows this entry.

82 models
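
One reason the 4-to-10 option expansion bites: it cuts the random-guess floor from 25% to 10%, so guessing recovers far less accuracy when reasoning fails. A two-line check:

# Random-guess baselines for 4-option MMLU vs. 10-option MMLU-Pro.
for n_options in (4, 10):
    print(f"{n_options} options -> chance accuracy {1 / n_options:.0%}")
# 4 options -> chance accuracy 25%
# 10 options -> chance accuracy 10%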

MATH

math · reasoning

The MATH dataset contains 12,500 challenging competition problems from the AMC 10, AMC 12, AIME, and other mathematics competitions. Each problem includes a full step-by-step solution, and the set spans difficulty levels 1-5 across seven subjects: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. An answer-extraction sketch follows this entry.

64 models
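
MATH solutions conventionally mark the final answer with \boxed{...}, so graders extract and compare that span. A minimal brace-balancing extractor (the sample solution below is made up):

def extract_boxed(solution: str):
    # Return the contents of the last \boxed{...}; braces must be
    # balanced, since answers like \frac{3}{4} nest them.
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len(r"\boxed{"), 1, []
    while i < len(solution) and depth:
        ch = solution[i]
        depth += (ch == "{") - (ch == "}")
        if depth:
            out.append(ch)
        i += 1
    return "".join(out)

sol = r"Adding the fractions gives $\boxed{\frac{3}{4}}$."
print(extract_boxed(sol))  # \frac{3}{4}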

HumanEval

code · reasoning

A benchmark that measures functional correctness for synthesizing programs from docstrings, consisting of 164 original programming problems assessing language comprehension, algorithms, and simple mathematics. A pass@k sketch follows this entry.

63 models
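
Functional correctness on HumanEval is usually reported as pass@k. The HumanEval paper's unbiased estimator, 1 - C(n-c, k)/C(n, k), gives the probability that at least one of k solutions drawn from n generated samples passes the unit tests, given that c of the n passed:

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # n: samples generated per problem; c: samples passing all tests.
    # Numerically stable product form of 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# 200 samples with 13 passing: probability that a random 10-sample
# submission contains at least one correct solution.
print(f"{pass_at_k(200, 13, 10):.3f}")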

AIME 2025

math · reasoning

All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000 to 999. It is used as an AI benchmark to evaluate large language models on problems requiring multi-step logical deduction and structured symbolic reasoning; a grading sketch follows this entry.

61 models
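
Because every AIME answer is an integer from 000 to 999, grading reduces to normalizing the model's final output to an integer and comparing. A minimal sketch, assuming the final answer has already been extracted from the model's reasoning:

def grade_aime(prediction: str, answer: int) -> bool:
    # Normalize the extracted final answer; anything that is not a
    # bare integer in [0, 999] scores zero.
    try:
        value = int(prediction.strip())
    except ValueError:
        return False
    return 0 <= value <= 999 and value == answer

print(grade_aime("042", 42))   # True: leading zeros are harmless
print(grade_aime("42.0", 42))  # False: not a bare integer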

LiveCodeBench

code · general

LiveCodeBench is a holistic, contamination-free code evaluation benchmark for large language models. It continuously collects new problems from programming contests (LeetCode, AtCoder, Codeforces) and evaluates four scenarios: code generation, self-repair, code execution, and test-output prediction. Problems are annotated with release dates to enable evaluation on problems released after a model's training cutoff; a filtering sketch follows this entry.

54 models
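
The contamination control is simple in principle: keep only problems released after the model's training cutoff. A sketch with made-up records (LiveCodeBench's actual field names may differ):

from datetime import date

# Made-up problem records; LiveCodeBench annotates each problem with
# its contest release date (real field names may differ).
problems = [
    {"id": "lc-3200",   "released": date(2024, 6, 15)},
    {"id": "cf-1990A",  "released": date(2024, 7, 20)},
    {"id": "ac-abc360", "released": date(2024, 6, 30)},
]

# Score a model trained through June 2024 only on later problems,
# so memorized contest solutions cannot inflate its results.
training_cutoff = date(2024, 6, 30)
eval_set = [p for p in problems if p["released"] > training_cutoff]
print([p["id"] for p in eval_set])  # ['cf-1990A']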

IFEval

code · general

Instruction-Following Evaluation (IFEval) is a benchmark for large language models focused on verifiable instructions: 25 instruction types across roughly 500 prompts, each containing one or more automatically checkable constraints. Two example checkers follow this entry.

52 models
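
"Verifiable" means each constraint can be checked programmatically rather than judged by a human or another model. Two toy checkers in that spirit (illustrative only, not IFEval's own code):

def check_min_words(response: str, n: int) -> bool:
    # "Answer in at least n words."
    return len(response.split()) >= n

def check_forbidden_word(response: str, word: str) -> bool:
    # "Do not use the word <word>."
    return word.lower() not in response.lower()

response = "Paris is the capital of France."
constraints = [
    (check_min_words, 5),
    (check_forbidden_word, "city"),
]
print(all(fn(response, arg) for fn, arg in constraints))  # True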