TriviaQA

A large-scale reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents (six per question on average) that provide high-quality distant supervision for answering the questions. The dataset features relatively complex, compositional questions with considerable syntactic and lexical variability, often requiring cross-sentence reasoning to find answers.
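To see what a question-answer-evidence triple looks like in practice, the sketch below loads the dataset through the Hugging Face `datasets` library. The dataset ID, config name, and field names are assumptions based on the public Hub card, not part of this page, and are worth checking against the current dataset card.

```python
# A minimal sketch, assuming the Hugging Face `datasets` mirror of TriviaQA;
# on the current Hub the dataset ID may be namespaced as "mandarjoshi/trivia_qa".
from datasets import load_dataset

# The "rc" config pairs each question with its evidence documents;
# "rc.nocontext" and "unfiltered" variants also exist.
data = load_dataset("trivia_qa", "rc", split="validation")

example = data[0]
print(example["question"])           # the trivia question
print(example["answer"]["value"])    # canonical answer string
print(example["answer"]["aliases"])  # accepted surface forms of the answer
```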

Paper: https://arxiv.org/abs/1705.03551

Progress Over Time

[Timeline chart: model performance evolution on TriviaQA over time, marking the state-of-the-art frontier and distinguishing open from proprietary models.]

TriviaQA Leaderboard

[Leaderboard table: 17 models ranked by TriviaQA score, with columns for context window, cost, and license. Kimi K2 Base by Moonshot AI leads at 0.851; models from Mistral AI and other providers follow.]

FAQ

Common questions about TriviaQA

What is TriviaQA?
TriviaQA is a large-scale reading comprehension dataset of over 650K question-answer-evidence triples: 95K question-answer pairs authored by trivia enthusiasts, each paired with independently gathered evidence documents (six per question on average) that provide high-quality distant supervision.

Where can I find the TriviaQA paper?
The TriviaQA paper is available at https://arxiv.org/abs/1705.03551. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.

How does the TriviaQA leaderboard work?
The leaderboard ranks 17 AI models by their performance on this benchmark. Currently, Kimi K2 Base by Moonshot AI leads with a score of 0.851, and the average score across all models is 0.731.

What is the highest TriviaQA score?
The highest TriviaQA score is 0.851, achieved by Kimi K2 Base from Moonshot AI (see the exact-match sketch below for how such scores are commonly computed).

How many models have been evaluated on TriviaQA?
17 models have been evaluated on the TriviaQA benchmark. All 17 results are self-reported; none have been independently verified.

What does TriviaQA evaluate?
TriviaQA is categorized under general and reasoning, and it evaluates text models.
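The scores above are fractions of questions answered correctly. TriviaQA-style evaluations typically normalize answers (lowercasing, stripping punctuation and articles) and accept a match against any of a question's answer aliases. The sketch below illustrates that common convention; it is not the scoring pipeline behind this particular leaderboard, which is not documented on this page.

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold_aliases: list[str]) -> bool:
    """True if the normalized prediction matches any accepted alias."""
    pred = normalize_answer(prediction)
    return any(pred == normalize_answer(alias) for alias in gold_aliases)


# Example: aliases let either surface form of the answer count as correct.
print(exact_match("The Beatles!", ["Beatles", "The Beatles"]))  # True
```

A benchmark score is then the mean of `exact_match` over all evaluated questions, so 0.851 corresponds to about 85% of questions answered correctly under this convention.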