Improve model card: Add pipeline tag, library name, and explicit links
#1 by nielsr (HF Staff), opened

README.md CHANGED
```diff
@@ -1,24 +1,24 @@
 ---
-license: apache-2.0
 base_model:
 - Qwen/Qwen3-4B-Instruct-2507
-task_categories:
-- question-answering
-- text-generation
 language:
 - en
+license: apache-2.0
 tags:
 - agent
 - Agentic Learning
 - tool use
 - BFCL
+task_categories:
+- question-answering
+- text-generation
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
-
 # FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling
 
-[](https://arxiv.org/abs/2510.24645) [](https://huggingface.co/Bingguang/FunReason-MT) [](https://huggingface.co/datasets/Bingguang/FunReason-MT)
-
+[](https://arxiv.org/abs/2510.24645) [](https://huggingface.co/papers/2510.24645) [](https://huggingface.co/Bingguang/FunReason-MT) [](https://huggingface.co/datasets/Bingguang/FunReason-MT) [](https://github.com/inclusionAI/AWorld-RL) [](https://github.com/inclusionAI/AWorld)
 
 ## Model Overview
 
@@ -30,9 +30,7 @@ FunReason-MT-4B achieves state-of-the-art results on the **Berkeley Function-Cal
 - **Size:** 4 Billion parameters
 - **Key Capability:** Advanced Multi-Turn Function Calling and Agentic Tool-Use
 
-The full usage of the model is in
-
-
+The full usage of the model is in the [AWorld-RL GitHub repository](https://github.com/inclusionAI/AWorld-RL).
 
 
 ## 📊 Evaluation Results

@@ -195,8 +193,11 @@ class FunReasonMTHandler(OSSHandler):
         cleaned_response = model_response
         if "</think>" in model_response:
             parts = model_response.split("</think>")
-            reasoning_content = parts[0].rstrip("\n")
+            reasoning_content = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
+            cleaned_response = parts[-1].lstrip("\n")
         else:
             cleaned_response = "response outputs too long or no slash think in response."
         print("cleaned_response: ", cleaned_response)
```
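For reference, the scalar fields this PR adds (`pipeline_tag`, `library_name`, `license`) can be read back out of the card's YAML frontmatter. A minimal sketch in plain Python, with no YAML dependency; the inlined card text and the helper name `frontmatter_scalars` are illustrative, not part of the PR:

```python
CARD = """---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
language:
- en
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
---
"""

def frontmatter_scalars(card: str) -> dict:
    # Take the block between the first pair of --- markers and keep
    # simple "key: value" lines; list items ("- ...") and bare keys
    # with no inline value are skipped.
    block = card.split("---")[1]
    out = {}
    for line in block.splitlines():
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            if value.strip():
                out[key.strip()] = value.strip()
    return out

meta = frontmatter_scalars(CARD)
# meta["pipeline_tag"] == "text-generation", meta["library_name"] == "transformers"
```

The Hub reads these same fields to pick the widget type and the default loading library for the model page.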
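The response-cleaning logic in the handler diff can be lifted into a standalone helper for testing. A minimal sketch that mirrors the behavior shown above; the function name and the sample strings are hypothetical, not from the handler:

```python
def strip_think_block(model_response: str) -> tuple[str, str]:
    """Split a model response into (reasoning content, cleaned response).

    Mirrors the handler logic: text between <think> and </think> is
    reasoning; text after the last </think> is the cleaned response.
    """
    if "</think>" not in model_response:
        return "", "response outputs too long or no slash think in response."
    parts = model_response.split("</think>")
    reasoning = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
    cleaned = parts[-1].lstrip("\n")
    return reasoning, cleaned


reasoning, cleaned = strip_think_block("<think>\ncheck args\n</think>\ncall get_weather")
# reasoning == "check args", cleaned == "call get_weather"
```

Note that when the `</think>` marker is missing (for example, a truncated generation), the handler deliberately replaces the whole output with an error string rather than forwarding unparsed text.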