Bingguang committed on
Commit 8b82b4d · verified · 1 Parent(s): f6773ea

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -24,7 +24,7 @@ library_name: transformers
 
 The **FunReason-MT-4B** model is a high-performance **Large Language Model (LLM)** fine-tuned for complex, multi-turn **Function Calling (FC)** and agentic tool-use tasks. Built upon the **Qwen3-4B-Instruct-2507** base model, it has been trained using the novel **FunReason-MT data synthesis framework**.
 
-FunReason-MT-4B achieves state-of-the-art results on the **Berkeley Function-Calling Leaderboard (BFCLv3)** Multi-Turn and Agentic Evaluation benchmarks. This performance demonstrates that high-quality, synthesized data can effectively overcome the complexity barrier in multi-turn FC data generation.
+FunReason-MT-4B achieves superior results on the **Berkeley Function-Calling Leaderboard (BFCLv3)** Multi-Turn and Agentic Evaluation benchmarks. This performance demonstrates that high-quality, synthesized data can effectively overcome the complexity barrier in multi-turn FC data generation.
 
 - **Base Model:** Qwen3-4B-Instruct-2507
 - **Size:** 4 Billion parameters
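
The README excerpt above describes multi-turn function calling without showing the conversation shape. As a minimal sketch of what a multi-turn FC exchange typically looks like — assuming an OpenAI-style tool-call message schema, which is an assumption here; the exact format FunReason-MT-4B was trained on is not shown in this diff (the `get_weather` tool and `call_0` id are hypothetical):

```python
# Hedged sketch of a multi-turn function-calling (FC) conversation.
# Schema assumption: OpenAI-style "tool_calls" / "tool" role messages.

def add_tool_result(messages, call_id, content):
    """Append a tool-role message answering a prior tool call."""
    messages.append({"role": "tool", "tool_call_id": call_id, "content": content})
    return messages

# Turn 1: the user asks; the model replies with a structured tool call
# instead of prose.
messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_0",  # hypothetical call id
            "type": "function",
            "function": {"name": "get_weather",       # hypothetical tool
                         "arguments": '{"city": "Paris"}'},
        }],
    },
]

# Turn 2: the tool's output is fed back, and the model answers in prose.
add_tool_result(messages, "call_0", '{"temp_c": 18, "sky": "clear"}')
messages.append({"role": "assistant",
                 "content": "It is 18 °C and clear in Paris."})
```

The multi-turn difficulty the README alludes to comes from the model having to keep earlier tool calls and their results in context when deciding whether the next step is another tool call or a final answer.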