---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
language:
- en
license: apache-2.0
tags:
- agent
- Agentic Learning
- tool use
- BFCL
task_categories:
- question-answering
- text-generation
pipeline_tag: text-generation
library_name: transformers
---

# FunReason-MT Technical Report: Advanced Data Synthesis Solution for Real-world Multi-Turn Tool-use

[![arXiv](https://img.shields.io/badge/arXiv-2510.24645-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2510.24645) [![Paper](https://img.shields.io/badge/Hugging%20Face-Paper-yellow?logo=huggingface)](https://huggingface.co/papers/2510.24645) [![Model](https://img.shields.io/badge/Hugging%20Face-Model-yellow?logo=huggingface)](https://huggingface.co/Bingguang/FunReason-MT) [![Dataset](https://img.shields.io/badge/Hugging%20Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/Bingguang/FunReason-MT) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/inclusionAI/AWorld-RL) [![Project Page](https://img.shields.io/badge/Project-AWorld-green)](https://github.com/inclusionAI/AWorld)

## Model Overview

The **FunReason-MT-4B** model is a high-performance **Large Language Model (LLM)** fine-tuned for complex, multi-turn **Function Calling (FC)** and agentic tool-use tasks. Built upon the **Qwen3-4B-Instruct-2507** base model, it was trained using the novel **FunReason-MT data synthesis framework**.

FunReason-MT-4B achieves superior results on the **Berkeley Function-Calling Leaderboard (BFCLv3)** Multi-Turn and Agentic Evaluation benchmarks. This performance demonstrates that high-quality synthesized data can effectively overcome the complexity barrier in multi-turn FC data generation.

  - **Base Model:** Qwen3-4B-Instruct-2507 
  - **Size:** 4 Billion parameters
  - **Key Capability:** Advanced Multi-Turn Function Calling and Agentic Tool-Use 

Full usage instructions for the model are provided in our [BFCL PR](https://github.com/ShishirPatil/gorilla/pull/1229).
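
For a quick local test outside of BFCL, the model can be loaded like any other `transformers` causal LM. The snippet below is a minimal sketch, assuming the Hugging Face repo id `Bingguang/FunReason-MT` (taken from the badges above) and the chat template inherited from Qwen3-4B-Instruct-2507; the generation parameters are illustrative, not the official evaluation settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bingguang/FunReason-MT"  # assumed repo id, taken from the model badge above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "What is the weather in Berlin? Use the available tools if needed."}
]

# Apply the chat template inherited from the Qwen3-4B-Instruct-2507 base model
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```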


## 📊 Evaluation Results

The model was rigorously evaluated on the Berkeley Function-Calling Leaderboard (BFCL).

### BFCLv3 Multi-Turn and Single-Turn Performance

| Model (4B - 235B)                      |             Multi-Turn (Overall)             |            Single-Turn (Overall)            |
| :------------------------------------- | :------------------------------------------: | :------------------------------------------: |
| Qwen3-4B-Instruct (Base)               |        15.75         |        78.19         |
| **Qwen3-4B + FunReason-MT (RL)** | **57.75**  | **85.47**  |
| Claude-Sonnet-4-20250514               |        54.75         |        84.72         |
| DeepSeek-R1-0528                       |        44.50         |        78.22         |
| GPT-4o-2024-11-20                      |        42.50         |        77.21         |

### BFCL Agentic Evaluation (BFCLv4 OOD)

The FunReason-MT-trained model leads in out-of-distribution agentic tasks (Web Search and Memory).

| Model                          |             BFCLv4 Overall Score             |
| :----------------------------- | :------------------------------------------: |
| **FunReason-MT-4B (RL)** | **15.10**  |
| ToolACE-2-8B                   |      14.83       |
| BitAgent-8B                    |      8.24       |
| XLAM-2-3b-fc-r                 |      7.42       |
| watt-tool-8B                   |    6.30     |


-----

## 💻 Training Data and Framework

### FunReason-MT Dataset

The training set comprises **16,000 high-quality multi-turn samples**. This dataset was generated with the three-phase FunReason-MT data synthesis framework, which builds complex trajectories through:

1.  **Environment-API Graph Interactions** for collecting goal-directed, correct execution traces.
2.  **Advanced Tool-Query Synthesis** for creating logical-jump queries that abstract multi-step actions.
3.  **Guided Iterative Chain** for enforcing reliable, consistent Chain-of-Thought (CoT) generation using self-correction.

### Training Details

The model was fine-tuned with function-calling data from APIGen and the FunReason-MT dataset.

  - **Training Libraries:** LLaMA-Factory and Verl.
  - **Methodology:** Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL).
  - **Hardware:** 32 NVIDIA H20 GPUs.

### Usage
Here we provide a code snippet of the BFCL handler for FunReason-MT.
```python

class FunReasonMTHandler(OSSHandler):
    def __init__(self, model_name, temperature) -> None:
        super().__init__(model_name, temperature)
        self.is_fc_model = False
        self.top_p = 0.7
        self.max_output_len = 20000
        self.max_context_length = 247000

    @override
    def _query_prompting(self, inference_data: dict):
        print("overide _query_prompting")
        # We use the OpenAI Completions API
        function: list[dict] = inference_data["function"]
        message: list[dict] = inference_data["message"]

        formatted_prompt: str = self._format_prompt(message, function)
        inference_data["inference_input_log"] = {"formatted_prompt": formatted_prompt}

        # Tokenize the formatted prompt to get token count
        input_token_count = len(self.tokenizer.tokenize(formatted_prompt))

        # Determine the number of tokens to request, capped at self.max_output_len.
        if self.max_context_length < input_token_count + 2:
            # If the prompt already fills the context window, request 1000 tokens; the call will error anyway
            leftover_tokens_count = 1000
        else:
            leftover_tokens_count = min(
                self.max_output_len,
                self.max_context_length - input_token_count - 2,
            )

        extra_body = {}
        if hasattr(self, "stop_token_ids"):
            extra_body["stop_token_ids"] = self.stop_token_ids
        if hasattr(self, "skip_special_tokens"):
            extra_body["skip_special_tokens"] = self.skip_special_tokens

        start_time = time.time()
        if len(extra_body) > 0:
            api_response = self.client.completions.create(
                model=self.model_path_or_id,
                temperature=self.temperature,
                top_p=self.top_p,
                prompt=formatted_prompt,
                max_tokens=leftover_tokens_count,
                extra_body=extra_body,
                timeout=72000,  # Avoid timeout errors
            )
        else:
            api_response = self.client.completions.create(
                model=self.model_path_or_id,
                temperature=self.temperature,
                top_p=self.top_p,
                prompt=formatted_prompt,
                max_tokens=leftover_tokens_count,
                timeout=72000,  # Avoid timeout errors
            )
        end_time = time.time()

        return api_response, end_time - start_time

    def _process_tool_response(self, tool_response_lst):
        processed_tool_response = []
        for tool_response in tool_response_lst:
            processed_tool_response.append(tool_response)
        return processed_tool_response

    @override
    def _format_prompt(self, messages, function):
        new_messages = []
        tool_content = []
        for idx, message in enumerate(messages):
            role = message["role"]
            content = message["content"]
            if role != "tool":
                if len(tool_content) != 0:
                    tool_message = {
                        "role": "tool",
                        "content": str(tool_content),
                    }
                    new_messages.append(tool_message)
                    tool_content = []
                new_messages.append(message)
            else:
                tool_content.append(content)
        if len(tool_content) != 0:
            tool_message = {
                "role": "tool",
                "content": str(tool_content),
            }
            new_messages.append(tool_message)
            tool_content = []
        print("new_messages", new_messages)
        formatted_prompt = self.tokenizer.apply_chat_template(
            new_messages, tokenize=False, add_generation_prompt=True
        )
        formatted_prompt += "<think>"
        print("formated_prompt", formatted_prompt)
        return formatted_prompt

    @override
    def _parse_query_response_prompting(self, api_response: Any) -> dict:
        reasoning_content = ""
        model_response = api_response.choices[0].text
        cleaned_response = ""
        reasoning_content = ""
        cleaned_response = model_response
        if "</think>" in model_response:
            parts = model_response.split("</think>")
            reasoning_content = parts[0].rstrip("
").split("<think>")[-1].lstrip("
")
            cleaned_response = parts[-1].lstrip("
")
        else:
            cleaned_response = "response outputs too long or no slash think in response."
        print("cleaned_response: ", cleaned_response)
        response_data = {
            "model_responses": cleaned_response,
            "model_responses_message_for_chat_history": {
                "role": "assistant",
                "content": cleaned_response,
            },
            "reasoning_content": reasoning_content,
            "input_token": api_response.usage.prompt_tokens,
            "output_token": api_response.usage.completion_tokens,
        }

        # Attach reasoning content to the assistant message for the next turn if present
        if reasoning_content:
            response_data["model_responses_message_for_chat_history"][
                "reasoning_content"
            ] = reasoning_content

        if not reasoning_content:
            del response_data["reasoning_content"]

        return response_data
```
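
As a small illustration of what the handler does to a conversation before it reaches the model, the following standalone sketch reproduces the two key transformations from the code above: consecutive `tool` messages are merged into a single `tool` message whose content is a stringified list (as in `_format_prompt`), and the `<think>...</think>` block is split from the completion (as in `_parse_query_response_prompting`). The conversation and the sample completion are made-up examples for demonstration only.

```python
# Standalone demo of the two transformations used by FunReasonMTHandler above.

def merge_tool_messages(messages):
    """Collapse consecutive role="tool" messages into one, mirroring _format_prompt."""
    new_messages, tool_content = [], []
    for message in messages:
        if message["role"] == "tool":
            tool_content.append(message["content"])
            continue
        if tool_content:
            new_messages.append({"role": "tool", "content": str(tool_content)})
            tool_content = []
        new_messages.append(message)
    if tool_content:
        new_messages.append({"role": "tool", "content": str(tool_content)})
    return new_messages


def split_reasoning(model_response):
    """Separate <think>...</think> reasoning from the answer, mirroring _parse_query_response_prompting."""
    if "</think>" not in model_response:
        return "", model_response
    parts = model_response.split("</think>")
    reasoning = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
    answer = parts[-1].lstrip("\n")
    return reasoning, answer


messages = [
    {"role": "user", "content": "Book a flight and a hotel."},
    {"role": "assistant", "content": "[book_flight(destination='SFO')]"},
    {"role": "tool", "content": "{'flight_id': 'AA123'}"},
    {"role": "tool", "content": "{'hotel_id': 'H42'}"},
]
# The two tool results are merged into one tool message whose content is a stringified list
print(merge_tool_messages(messages)[-1])

reasoning, answer = split_reasoning(
    "<think>I should confirm both bookings.</think>\nBoth bookings are confirmed."
)
print(reasoning)  # I should confirm both bookings.
print(answer)     # Both bookings are confirmed.
```
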
-----

## 🔗 Related Projects and Citation

This work is part of the open-source project **[AWorld, InclusionAI](https://github.com/inclusionAI/AWorld/)**.

If you use FunReason-MT in your research, please cite the technical report:

```
@article{xu2025funreason,
  title={FunReason-MT Technical Report: Advanced Data Synthesis Solution for Real-world Multi-Turn Tool-use},
  author={Zengzhuang Xu and Bingguang Hao and Zechuan Wang and Yuntao Wen and Xinyi Xu and Yang Liu and Long Chen and Dong Wang and Maolin Wang and Tong Zhao and Yicheng Chen and Cunyin Peng and Jinjie Gu and Leilei Gan and Xiangyu Zhao and Chenyi Zhuang and Shi Gu},
  journal={arXiv preprint arXiv:2510.24645},
  year={2025}
}
```
### Contact

For inquiries, please contact:

* `bingguanghao7@gmail.com`