Bingguang committed · Commit f879aa4 · verified · 1 Parent(s): 4a7aca5

Update README.md

Files changed (1): README.md (+3 −4)
README.md CHANGED
@@ -19,20 +19,19 @@ The integration of large language models (LLMs) with function calling has emerge
 ## Main Result
 
 <div align="center">
-<img src="https://github.com/BingguangHao/FunReason/blob/main/img/result.png?raw=true" width="60%" />
+<img src="https://github.com/BingguangHao/FunReason/blob/main/img/result.png?raw=true" width="80%" />
 </div>
 
 <div align="center">
-<img src="https://github.com/BingguangHao/FunReason/blob/main/img/code.png?raw=true" width="60%" />
+<img src="https://github.com/BingguangHao/FunReason/blob/main/img/code.png?raw=true" width="80%" />
 </div>
 
-Performance of SFT and FunReason models on HumanEval and MBPP (including their HumanEval+ and MBPP+) compared with Qwen2.5-Coder-7B-Inst.
 
 ## Usage Recommendations
 
 **We recommend adhering to the following configurations when utilizing the FunReason model, to achieve the expected performance:**
 
-1. **Using the original BFCL system prompt and the chat templete of Qwen.**
+1. **Use the original BFCL system prompt and the chat template of Qwen.**
 2. In the model handler, the delimiter of the answer is "\n", and the last string obtained by delimiting is taken as the answer.
 3. **To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
 
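The two parsing conventions in the Usage Recommendations can be sketched in a few lines of Python. This is an illustrative sketch only, not the official BFCL model handler: the function names `force_think_prefix` and `extract_answer` are hypothetical, and it assumes the model's reasoning and final answer are separated by newlines as described in recommendation 2.

```python
# Illustrative sketch of the Usage Recommendations (not the official BFCL handler).
# Assumption: model output is newline-delimited, with the answer on the last line.

def force_think_prefix(response: str) -> str:
    """Recommendation 3: enforce that every output begins with "<think>\n"."""
    prefix = "<think>\n"
    if not response.startswith(prefix):
        return prefix + response
    return response

def extract_answer(model_output: str) -> str:
    """Recommendation 2: split on "\n" and take the last segment as the answer."""
    return model_output.split("\n")[-1]

# Example: reasoning lines followed by the function-call answer on the last line.
raw = "<think>\nThe user asks for weather, so I call the tool.\n[get_weather(city='Paris')]"
print(extract_answer(raw))  # prints "[get_weather(city='Paris')]"
```

Splitting on the last `"\n"` means any `<think>…` reasoning above the final line is discarded before the answer is scored, which is why recommendation 3 (forcing the reasoning prefix) does not interfere with answer extraction.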