nielsr HF Staff committed
Commit
44d4106
·
verified ·
1 Parent(s): 0e78e55

Improve model card: Add pipeline tag, library name, and explicit links


This PR enhances the model card for FunReason-MT-4B by:

- Adding `pipeline_tag: text-generation` to improve discoverability on the Hugging Face Hub.
- Adding `library_name: transformers`, since evidence from `config.json` (the `transformers_version` field) and the use of `tokenizer.apply_chat_template` in the usage example indicates compatibility with the Hugging Face Transformers library. This enables the automated "how to use" widget.
- Updating the top badge section to include explicit links to the Hugging Face paper page ([https://huggingface.co/papers/2510.24645](https://huggingface.co/papers/2510.24645)), the GitHub repository ([https://github.com/inclusionAI/AWorld-RL](https://github.com/inclusionAI/AWorld-RL)), and the overarching project page ([https://github.com/inclusionAI/AWorld](https://github.com/inclusionAI/AWorld)).
- Updating the reference to "full usage" to point to the main GitHub repository rather than a specific pull request.

The existing descriptive content and usage example have been preserved without alteration, in line with the contribution guidelines.

Please review and merge these improvements.
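For quick reference, the front matter that results from these metadata changes would look roughly like the following (a sketch of the merged YAML block; field order may differ in the final file):

```yaml
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B-Instruct-2507
language:
- en
tags:
- agent
- Agentic Learning
- tool use
- BFCL
task_categories:
- question-answering
- text-generation
pipeline_tag: text-generation
library_name: transformers
---
```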

Files changed (1)
  1. README.md +13 -12
README.md CHANGED
@@ -1,24 +1,24 @@
 ---
-license: apache-2.0
 base_model:
 - Qwen/Qwen3-4B-Instruct-2507
-task_categories:
-- question-answering
-- text-generation
 language:
 - en
+license: apache-2.0
 tags:
 - agent
 - Agentic Learning
 - tool use
 - BFCL
+task_categories:
+- question-answering
+- text-generation
+pipeline_tag: text-generation
+library_name: transformers
 ---
 
-
 # FunReason-MT Technical Report: Overcoming the Complexity Barrier in Multi-Turn Function Calling
 
-[![arXiv](https://img.shields.io/badge/arXiv-2510.24645-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2510.24645) [![Model](https://img.shields.io/badge/Hugging%20Face-Model-yellow?logo=huggingface)](https://huggingface.co/Bingguang/FunReason-MT) [![Dataset](https://img.shields.io/badge/Hugging%20Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/Bingguang/FunReason-MT)
-
+[![arXiv](https://img.shields.io/badge/arXiv-2510.24645-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2510.24645) [![Paper](https://img.shields.io/badge/Hugging%20Face-Paper-yellow?logo=huggingface)](https://huggingface.co/papers/2510.24645) [![Model](https://img.shields.io/badge/Hugging%20Face-Model-yellow?logo=huggingface)](https://huggingface.co/Bingguang/FunReason-MT) [![Dataset](https://img.shields.io/badge/Hugging%20Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/Bingguang/FunReason-MT) [![GitHub](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/inclusionAI/AWorld-RL) [![Project Page](https://img.shields.io/badge/Project-AWorld-green)](https://github.com/inclusionAI/AWorld)
 
 ## Model Overview
 
@@ -30,9 +30,7 @@ FunReason-MT-4B achieves state-of-the-art results on the **Berkeley Function-Cal
 - **Size:** 4 Billion parameters
 - **Key Capability:** Advanced Multi-Turn Function Calling and Agentic Tool-Use
 
-The full usage of the model is in this [pull request](https://github.com/ShishirPatil/gorilla/pull/1229)
-
-
+The full usage of the model is in the [AWorld-RL GitHub repository](https://github.com/inclusionAI/AWorld-RL).
 
 
 ## 📊 Evaluation Results
@@ -195,8 +193,11 @@ class FunReasonMTHandler(OSSHandler):
         cleaned_response = model_response
         if "</think>" in model_response:
             parts = model_response.split("</think>")
-            reasoning_content = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
-            cleaned_response = parts[-1].lstrip("\n")
+            reasoning_content = parts[0].rstrip("
+").split("<think>")[-1].lstrip("
+")
+            cleaned_response = parts[-1].lstrip("
+")
         else:
             cleaned_response = "response outputs too long or no slash think in response."
         print("cleaned_response: ", cleaned_response)
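The `</think>`-stripping logic touched by the last hunk can be expressed as a small standalone helper. This is a sketch, not the handler's actual code: `strip_think` is a hypothetical name, and `"\n"` escapes are used in place of the literal in-string newlines shown in the diff:

```python
def strip_think(model_response: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Text between <think> and </think> is treated as reasoning content;
    everything after the last </think> is the cleaned answer.
    """
    if "</think>" not in model_response:
        # No reasoning block: return the response unchanged as the answer.
        return "", model_response
    parts = model_response.split("</think>")
    reasoning = parts[0].rstrip("\n").split("<think>")[-1].lstrip("\n")
    cleaned = parts[-1].lstrip("\n")
    return reasoning, cleaned
```

For example, `strip_think("<think>\nplan\n</think>\nanswer")` separates the reasoning `"plan"` from the answer `"answer"`.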