DROP
DROP (Discrete Reasoning Over Paragraphs) is a reading comprehension benchmark requiring discrete reasoning over paragraph content. It contains crowdsourced, adversarially-created questions that require resolving references and performing discrete operations like addition, counting, or sorting, demanding comprehensive paragraph understanding beyond paraphrase-and-entity-typing shortcuts.
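To make "discrete reasoning" concrete, here is a minimal sketch of a DROP-style item: a paragraph, a question that requires extracting numbers and performing an arithmetic operation on them, and a check of the numeric answer. The paragraph and question below are invented for illustration; they are not drawn from the actual dataset.

```python
import re

# Hypothetical DROP-style item (invented, not from the dataset).
paragraph = (
    "The Bears scored touchdowns of 25 and 42 yards in the first half, "
    "while the Packers answered with a 17-yard field goal."
)
question = "How many yards longer was the longest touchdown than the field goal?"

# A DROP question demands discrete operations, not span extraction:
# pull out the numbers, then compute (here, max followed by subtraction).
numbers = [int(n) for n in re.findall(r"\d+", paragraph)]
touchdowns, field_goal = numbers[:2], numbers[2]
answer = max(touchdowns) - field_goal

print(answer)  # 42 - 17 = 25
```

The answer (25) never appears verbatim in the paragraph, which is what defeats paraphrase-and-entity-typing shortcuts: the model must combine two extracted values with an operation.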
Progress Over Time
[Interactive timeline showing model performance evolution on DROP, with a state-of-the-art frontier and open vs. proprietary models distinguished.]
DROP Leaderboard
29 models
| Rank | Organization | Parameters | Context | Cost ($/1M tokens, input / output) |
|---|---|---|---|---|
| 1 | DeepSeek | 671B | 131K | $0.27 / $1.10 |
| 2 | Anthropic | — | 200K | $3.00 / $15.00 |
| 2 | Anthropic | — | 200K | $3.00 / $15.00 |
| 4 | OpenAI | — | 128K | $10.00 / $30.00 |
| 5 | Amazon | — | 300K | $0.80 / $3.20 |
| 6 | — | 405B | 128K | $0.89 / $0.89 |
| 7 | OpenAI | — | 128K | $2.50 / $10.00 |
| 8 | Anthropic | — | 200K | $0.80 / $4.00 |
| 8 | Anthropic | — | 200K | $15.00 / $75.00 |
| 10 | OpenAI | — | 33K | $30.00 / $60.00 |
| 11 | Amazon | — | 300K | $0.06 / $0.24 |
| 12 | OpenAI | — | 128K | $0.15 / $0.60 |
| 13 | — | 70B | 128K | $0.20 / $0.20 |
| 14 | Amazon | — | 128K | $0.03 / $0.14 |
| 15 | Meituan | 560B | 128K | $0.30 / $1.20 |
| 16 | Anthropic | — | 200K | $3.00 / $15.00 |
| 17 | Anthropic | — | 200K | $0.25 / $1.25 |
| 18 | Microsoft | 15B | 16K | $0.07 / $0.14 |
| 19 | Google | — | 2.1M | $2.50 / $10.00 |
| 20 | OpenAI | — | 16K | $0.50 / $1.50 |
| 21 | — | 2B | — | — |
| 21 | Google | 8B | — | — |
| 23 | — | 8B | 131K | $0.03 / $0.03 |
| 24 | — | 8B | 128K | $0.50 / $0.50 |
| 25 | Google | 8B | — | — |
| 25 | — | 2B | — | — |
| 27 | — | 7B | — | — |
| 28 | — | 8B | — | — |
| 29 | Baidu | 21B | 128K | $0.40 / $4.00 |
FAQ
Common questions about DROP
What is DROP?
DROP (Discrete Reasoning Over Paragraphs) is a reading comprehension benchmark requiring discrete reasoning over paragraph content. It contains crowdsourced, adversarially-created questions that require resolving references and performing discrete operations like addition, counting, or sorting, demanding comprehensive paragraph understanding beyond paraphrase-and-entity-typing shortcuts.
Where can I read the DROP paper?
The DROP paper is available at https://arxiv.org/abs/1903.00161. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.
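For intuition about how answers are scored, here is a much-simplified bag-of-tokens F1, in the spirit of the benchmark's token-overlap metric. This is a sketch only: the official DROP scorer additionally normalizes numbers, lowercases and strips articles and punctuation, and aligns multi-span answers, none of which is reproduced here.

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Simplified bag-of-tokens F1 between a predicted and a gold answer.

    The real DROP evaluation script does more (number normalization,
    multi-span alignment); this only measures whitespace-token overlap.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts each shared token at most
    # min(count_in_pred, count_in_gold) times.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("25 yards", "25"))  # partial credit: extra token lowers precision
```

An exact match scores 1.0, a disjoint answer scores 0.0, and partial overlaps fall in between, which is why F1 is reported alongside exact match.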
How are models ranked on the DROP leaderboard?
The DROP leaderboard ranks 29 AI models based on their performance on this benchmark. Currently, DeepSeek-V3 by DeepSeek leads with a score of 0.916. The average score across all models is 0.720.
What is the highest DROP score?
The highest DROP score is 0.916, achieved by DeepSeek-V3 from DeepSeek.
How many models have been evaluated on DROP?
29 models have been evaluated on the DROP benchmark, with 0 verified results and 28 self-reported results.
What categories does DROP fall under?
DROP is categorized under math and reasoning. The benchmark evaluates text models.