BFCL-v3

Berkeley Function Calling Leaderboard v3 (BFCL-v3) is an advanced benchmark that evaluates large language models' function calling capabilities through multi-turn and multi-step interactions. It introduces extended conversational exchanges where models must retain contextual information across turns and execute multiple internal function calls for complex user requests. The benchmark includes 1000 test cases across domains like vehicle control, trading bots, travel booking, and file system management, using state-based evaluation to verify both system state changes and execution path correctness.
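The state-based evaluation described above can be sketched in a few lines of Python. This is an illustrative toy, not BFCL's actual harness: the function name, field names, and the toy "vehicle control" case below are all hypothetical, chosen only to show the two checks the benchmark combines (final system state and execution path).

```python
# Minimal sketch of state-based evaluation (illustrative; not BFCL's real harness).
# A test case passes only if (1) the final system state matches the expected state
# and (2) the sequence of executed function calls matches an accepted path.

def evaluate_case(final_state: dict, expected_state: dict,
                  executed_calls: list, accepted_paths: list) -> bool:
    state_ok = final_state == expected_state       # did the world end up right?
    path_ok = executed_calls in accepted_paths     # did the model get there correctly?
    return state_ok and path_ok

# Toy "vehicle control" case: the model must start the engine, then set the speed.
expected = {"engine": "on", "speed": 30}
paths = [["start_engine", "set_speed"]]

print(evaluate_case({"engine": "on", "speed": 30}, expected,
                    ["start_engine", "set_speed"], paths))  # True
print(evaluate_case({"engine": "off", "speed": 0}, expected,
                    ["set_speed"], paths))                  # False
```

Checking the execution path in addition to the final state is what distinguishes this from simple output matching: a model that reaches the right state by an invalid sequence of calls still fails.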

Paper

The BFCL-v3 paper is available at https://openreview.net/forum?id=2GmDdhBdDk.

Progress Over Time

[Interactive timeline: model performance evolution on BFCL-v3, showing the state-of-the-art frontier for open and proprietary models.]

BFCL-v3 Leaderboard

18 models • 0 verified
| # | Organization | Params | Context | Cost (input / output) |
|---|--------------|--------|---------|-----------------------|
| 1 | Zhipu AI | 355B | 131K | $0.40 / $1.60 |
| 2 | Zhipu AI | 106B | — | — |
| 3 | — | 560B | 128K | $0.30 / $1.20 |
| 4 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 5 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.45 / $3.49 |
| 5 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 |
| 7 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 8 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.15 / $0.80 |
| 9 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 10 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 11 | Alibaba Cloud / Qwen Team | 480B | — | — |
| 12 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 |
| 13 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.49 |
| 14 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 |
| 15 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 |
| 15 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 |
| 17 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 |
| 18 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 |

(Model names, scores, and license details were not preserved in this extract; tied ranks account for the skipped positions 6 and 16.)

FAQ

Common questions about BFCL-v3

What is BFCL-v3?

Berkeley Function Calling Leaderboard v3 (BFCL-v3) is an advanced benchmark that evaluates large language models' function calling capabilities through multi-turn and multi-step interactions. It introduces extended conversational exchanges where models must retain contextual information across turns and execute multiple internal function calls for complex user requests. The benchmark includes 1000 test cases across domains like vehicle control, trading bots, travel booking, and file system management, using state-based evaluation to verify both system state changes and execution path correctness.

Where can I find the BFCL-v3 paper?

The BFCL-v3 paper is available at https://openreview.net/forum?id=2GmDdhBdDk. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.

Which model leads the BFCL-v3 leaderboard?

The BFCL-v3 leaderboard ranks 18 AI models based on their performance on this benchmark. Currently, GLM-4.5 by Zhipu AI leads with a score of 0.778. The average score across all models is 0.699.

What is the highest BFCL-v3 score?

The highest BFCL-v3 score is 0.778, achieved by GLM-4.5 from Zhipu AI.

How many models have been evaluated on BFCL-v3?

18 models have been evaluated on the BFCL-v3 benchmark, with 0 verified results and 18 self-reported results.

What categories does BFCL-v3 cover?

BFCL-v3 is categorized under agents, finance, general, reasoning, structured output, and tool calling. The benchmark evaluates text models.