---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
---
# FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement

<p align="center">
&nbsp;&nbsp;📊 <a href="https://huggingface.co/Bingguang/FunReason">Dataset (coming)</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤗 <a href="https://huggingface.co/Bingguang/FunReason">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://huggingface.co/Bingguang/FunReason">Paper (coming)</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://huggingface.co/Bingguang/FunReason">Blog (coming)</a>&nbsp;&nbsp; | &nbsp;&nbsp;📖 <a href="https://github.com/BingguangHao/FunReason">Github</a>
</p>

> [!IMPORTANT]
> - **We will release all of the code, the training dataset, and the model weights once they clear Ant Group's confidentiality review.**

## Abstract

The integration of large language models (LLMs) with function calling has emerged as a crucial capability for enhancing their practical utility in real-world applications. However, effectively combining reasoning processes with accurate function execution remains a significant challenge. Traditional training approaches often struggle to balance detailed reasoning steps with the precision of function calls, leading to suboptimal performance. To address these limitations, we introduce FunReason, a novel framework that enhances LLMs' function calling capabilities through an automated data refinement strategy and a Self-Refinement Multiscale Loss (SRML) approach. FunReason leverages LLMs' natural reasoning abilities to generate high-quality training examples, focusing on query parseability, reasoning coherence, and function call precision. The SRML approach dynamically balances the contribution of reasoning processes and function call accuracy during training, addressing the inherent trade-off between these two critical aspects. FunReason achieves performance comparable to GPT-4o while effectively mitigating catastrophic forgetting during fine-tuning. FunReason provides a comprehensive solution for enhancing LLMs' function calling capabilities by introducing a balanced training methodology and a data refinement pipeline. For the code and dataset, see our GitHub repository.

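The core idea of SRML, dynamically balancing the loss contributions of the reasoning segment and the function-call segment, can be sketched as follows. The linear weight schedule and function names below are illustrative assumptions for exposition only, not FunReason's actual implementation (which is defined in the paper and code release):

```python
def srml_loss(reasoning_loss: float, call_loss: float,
              step: int, total_steps: int) -> float:
    # Illustrative sketch: blend the per-segment losses, shifting
    # weight from the reasoning segment toward the function-call
    # segment as training progresses. The linear schedule here is
    # a placeholder assumption, not FunReason's actual schedule.
    call_weight = step / total_steps
    return (1 - call_weight) * reasoning_loss + call_weight * call_loss

# Early in training the reasoning term dominates the total loss;
# later in training the function-call term dominates.
early = srml_loss(reasoning_loss=2.0, call_loss=1.0, step=1, total_steps=10)
late = srml_loss(reasoning_loss=2.0, call_loss=1.0, step=9, total_steps=10)
```

Any schedule that trades off the two terms over training fits this shape; the point is that neither segment's loss is ignored at any stage.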
## Main Result

<div align="center">
<img src="https://github.com/BingguangHao/FunReason/blob/main/img/result.png?raw=true" width="60%" />
</div>

<div align="center">
<img src="https://github.com/BingguangHao/FunReason/blob/main/img/code.png?raw=true" width="60%" />
</div>

Performance of the SFT and FunReason models on HumanEval and MBPP (including HumanEval+ and MBPP+), compared with Qwen2.5-Coder-7B-Instruct.

## Usage Recommendations

**To achieve the expected performance, we recommend the following configuration when using the FunReason model:**

1. **Use the original BFCL system prompt.**
2. In the model handler, split the answer on "\n" and take the last segment as the final answer.
3. **To ensure the model engages in thorough reasoning, force every response to begin with "\<think\>\n".**

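The handler convention in items 2 and 3 can be sketched as follows. The function names and prompt template here are illustrative, not part of any released API:

```python
def build_prompt(system_prompt: str, query: str) -> str:
    # Item 3: force the response to open with "<think>\n" by making
    # it the generation prefix (illustrative template; in practice
    # the original BFCL system prompt is used verbatim, per item 1).
    return f"{system_prompt}\n\nUser: {query}\nAssistant: <think>\n"

def extract_answer(model_output: str) -> str:
    # Item 2: split the output on "\n" and take the last segment
    # as the final function-call answer.
    return model_output.strip().split("\n")[-1]

# The reasoning lines are discarded; only the trailing call survives.
sample = "<think>\nNeed the weather tool.\n</think>\n[get_weather(city='Paris')]"
answer = extract_answer(sample)
```

Because the answer is whatever follows the last newline, the model's final function call must be emitted on a single line for this extraction to work.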
## Citation
```bibtex
@article{FunReason,
  title={FunReason: Enhancing Large Language Models' Function Calling via Self-Refinement Multiscale Loss and Automated Data Refinement},
  author={Bingguang Hao and Maolin Wang and Zengzhuang Xu and Cunyin Peng and Yicheng Chen and Xiangyu Zhao and Jinjie Gu and Chenyi Zhuang}
}
```