BFCL

The Berkeley Function Calling Leaderboard (BFCL) is the first comprehensive and executable function-call evaluation dedicated to assessing Large Language Models' ability to invoke functions. It evaluates serial and parallel function calls across Python, Java, JavaScript, and REST APIs using a novel Abstract Syntax Tree (AST) evaluation method. The benchmark consists of over 2,000 question-function-answer pairs covering diverse application domains and complex use cases, including multiple function calls, parallel function calls, and multi-turn interactions.
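To make the AST-based check concrete, here is a minimal Python sketch of the idea: parse a generated call into an AST, then compare the function name and keyword arguments against a set of acceptable answers. The function name, parameters, and allowed values below are hypothetical, and this is not the official BFCL harness, which additionally handles positional arguments, type checking, parallel and multiple calls, and the non-Python targets.

```python
import ast

def call_matches(model_output: str, expected_name: str, allowed_args: dict) -> bool:
    """Return True if a generated call names the expected function and
    every keyword argument takes one of the allowed values."""
    try:
        tree = ast.parse(model_output, mode="eval")
    except SyntaxError:
        return False  # unparseable output counts as a failure
    call = tree.body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
        return False  # expected a plain function call like f(x=1)
    if call.func.id != expected_name:
        return False  # wrong function chosen
    for kw in call.keywords:
        if kw.arg not in allowed_args:
            return False  # hallucinated or misspelled parameter
        try:
            value = ast.literal_eval(kw.value)
        except ValueError:
            return False  # non-literal argument value
        if value not in allowed_args[kw.arg]:
            return False  # value outside the accepted set
    return True

# Hypothetical question-function-answer pair:
print(call_matches(
    'get_weather(city="Berkeley", unit="celsius")',
    expected_name="get_weather",
    allowed_args={"city": ["Berkeley"], "unit": ["celsius", "fahrenheit"]},
))  # -> True
```

Matching on the parsed tree rather than the raw string is what lets the benchmark accept semantically equivalent calls (reordered keyword arguments, different quoting) while still rejecting wrong function names, hallucinated parameters, and out-of-range values.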

Paper: https://openreview.net/pdf?id=2GmDdhBdDk

Progress Over Time

[Interactive timeline showing model performance evolution on BFCL, with a state-of-the-art frontier traced across open and proprietary models.]

BFCL Leaderboard

10 models • 0 verified
| # | Model | Organization | Params | Context | Input cost | Output cost |
|---|-------|--------------|--------|---------|------------|-------------|
| 1 | Llama 3.1 405B Instruct | Meta | 405B | 128K | $0.89 | $0.89 |
| 2 | — | — | 70B | 128K | $0.20 | $0.20 |
| 3 | — | — | 8B | 131K | $0.03 | $0.03 |
| 4 | — | Alibaba Cloud / Qwen Team | 235B | 128K | $0.10 | $0.10 |
| 5 | — | Alibaba Cloud / Qwen Team | 33B | 128K | $0.10 | $0.30 |
| 6 | — | Alibaba Cloud / Qwen Team | 31B | 128K | $0.10 | $0.30 |
| 7 | — | Amazon | — | 300K | $0.80 | $3.20 |
| 8 | — | Amazon | — | 300K | $0.06 | $0.24 |
| 9 | — | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 10 | — | — | — | 128K | $0.03 | $0.14 |

FAQ

Common questions about BFCL

Where can I find the BFCL paper?
The BFCL paper is available at https://openreview.net/pdf?id=2GmDdhBdDk. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.

How are models ranked on the BFCL leaderboard?
The leaderboard ranks 10 AI models based on their performance on the benchmark. Currently, Llama 3.1 405B Instruct by Meta leads with a score of 0.885. The average score across all models is 0.717.

What is the highest BFCL score?
The highest BFCL score is 0.885, achieved by Llama 3.1 405B Instruct from Meta.

How many models have been evaluated on BFCL?
10 models have been evaluated on the BFCL benchmark, with 0 verified results and 10 self-reported results.

What categories does BFCL cover?
BFCL is categorized under general, reasoning, and tool calling. The benchmark evaluates text models.