Safetensors
qwen2

Improve language tag

#2
by lbourdois - opened
Files changed (1)
  1. README.md +174 -160
README.md CHANGED
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/MadeAgents/Hammer2.0-3b/blob/main/LICENSE
datasets:
- Salesforce/xlam-function-calling-60k
- MadeAgents/xlam-irrelevance-7.5k
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
## Introduction
We're excited to release the lightweight Hammer 2.0 models ([0.5B](https://huggingface.co/MadeAgents/Hammer2.0-0.5b), [1.5B](https://huggingface.co/MadeAgents/Hammer2.0-1.5b), [3B](https://huggingface.co/MadeAgents/Hammer2.0-3b), and [7B](https://huggingface.co/MadeAgents/Hammer2.0-7b)) with strong function-calling capability, empowering developers to build personalized, on-device agentic applications.

## Model Details
Hammer 2.0 is fine-tuned from the [Qwen 2.5 series](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e) and the [Qwen 2.5 Coder series](https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f) using function-masking techniques. It is trained on the [APIGen Function Calling Datasets](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), which contain 60,000 samples, supplemented by [xlam-irrelevance-7.5k](https://huggingface.co/datasets/MadeAgents/xlam-irrelevance-7.5k), which we generated. Hammer 2.0 achieves strong performance across numerous function-calling benchmarks. For more details, please refer to [Hammer: Robust Function-Calling for On-Device Language Models via Function Masking](https://arxiv.org/abs/2410.04587) and the [Hammer GitHub repository](https://github.com/MadeAgents/Hammer).

## Evaluation
The evaluation results of the Hammer 2.0 models on the Berkeley Function-Calling Leaderboard (BFCL-v3) are presented in the following table:
<div style="text-align: center;">
<img src="v2_figures/bfcl.PNG" alt="overview" width="1000" style="margin: auto;">
</div>

The Hammer 2.0 series consistently achieves the best performance among models of comparable scale. The 7B model outperforms most function-calling-enhanced models, and the 1.5B model also delivers surprisingly strong results.

In addition, we evaluated the Hammer 2.0 models on other academic benchmarks to further demonstrate their generalization ability.

<div style="text-align: center;">
<img src="v2_figures/others-v2.PNG" alt="overview" width="1000" style="margin: auto;">
</div>

Hammer 2.0 models show highly stable performance across these benchmarks, suggesting the robustness of the series. In contrast, the baseline approaches display varying levels of effectiveness.

## Requirements
The code for the Hammer 2.0 models is included in the latest Hugging Face `transformers` library, and we advise you to install `transformers>=4.37.0`.

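If you want to guard against an older installation, the version requirement can be checked with a small helper. This is a minimal sketch; the `version_at_least` function is our own illustration (not part of `transformers`), and it does not handle dev/rc version suffixes:

```python
def version_at_least(installed: str, required: str) -> bool:
    """Numerically compare dotted version strings, e.g. '4.40.1' >= '4.37.0'."""
    parse = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return parse(installed) >= parse(required)

# Hypothetical usage against the installed library:
# import transformers
# assert version_at_least(transformers.__version__, "4.37.0")
```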
## How to Use
This is a simple example of how to use our model.
~~~python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MadeAgents/Hammer2.0-3b"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Please use our provided instruction prompt for best performance
TASK_INSTRUCTION = """You are a tool calling assistant. In order to complete the user's request, you need to select one or more appropriate tools from the following tools and fill in the correct values for the tool parameters. Your specific tasks are:
1. Make one or more function/tool calls to meet the request based on the question.
2. If none of the functions can be used, point it out and refuse to answer.
3. If the given question lacks the parameters required by the function, also point it out.
"""

FORMAT_INSTRUCTION = """
The output MUST strictly adhere to the following JSON format, and NO other text MUST be included.
The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please directly output an empty list '[]'
```
[
    {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
    ... (more tool calls as required)
]
```
"""

# Define the input query and available tools
query = "Where can I find live giveaways for beta access and games? And what's the weather like in New York, US?"

live_giveaways_by_type = {
    "name": "live_giveaways_by_type",
    "description": "Retrieve live giveaways from the GamerPower API based on the specified type.",
    "parameters": {
        "type": "object",
        "properties": {
            "type": {
                "type": "string",
                "description": "The type of giveaways to retrieve (e.g., game, loot, beta).",
                "default": "game"
            }
        },
        "required": ["type"]
    }
}
get_current_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            }
        },
        "required": ["location"]
    }
}
get_stock_price = {
    "name": "get_stock_price",
    "description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.",
    "parameters": {
        "type": "object",
        "properties": {
            "ticker": {
                "type": "string",
                "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
            }
        },
        "required": ["ticker"]
    }
}

def convert_to_format_tool(tools):
    """Flatten OpenAI-style tool schemas into the simplified format expected by Hammer."""
    if isinstance(tools, dict):
        format_tools = {
            "name": tools["name"],
            "description": tools["description"],
            "parameters": tools["parameters"].get("properties", {}),
        }
        required = tools["parameters"].get("required", [])
        for param in required:
            format_tools["parameters"][param]["required"] = True
        for param in format_tools["parameters"].keys():
            if "default" in format_tools["parameters"][param]:
                default = format_tools["parameters"][param]["default"]
                format_tools["parameters"][param]["description"] += f" Default is '{default}'."
        return format_tools
    elif isinstance(tools, list):
        return [convert_to_format_tool(tool) for tool in tools]
    else:
        return tools

# Helper function to build the input prompt for our model
def build_prompt(task_instruction: str, format_instruction: str, tools: list, query: str):
    prompt = f"[BEGIN OF TASK INSTRUCTION]\n{task_instruction}\n[END OF TASK INSTRUCTION]\n\n"
    prompt += f"[BEGIN OF AVAILABLE TOOLS]\n{json.dumps(tools)}\n[END OF AVAILABLE TOOLS]\n\n"
    prompt += f"[BEGIN OF FORMAT INSTRUCTION]\n{format_instruction}\n[END OF FORMAT INSTRUCTION]\n\n"
    prompt += f"[BEGIN OF QUERY]\n{query}\n[END OF QUERY]\n\n"
    return prompt

# Build the input and start the inference
openai_format_tools = [live_giveaways_by_type, get_current_weather, get_stock_price]
format_tools = convert_to_format_tool(openai_format_tools)
content = build_prompt(TASK_INSTRUCTION, FORMAT_INSTRUCTION, format_tools, query)

messages = [
    {"role": "user", "content": content}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Stop generation at the tokenizer's end-of-sequence token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
~~~
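Per `FORMAT_INSTRUCTION`, the model's reply should be a JSON list of tool calls (or `[]` when no tool applies). A minimal sketch of turning such a reply into `(name, arguments)` pairs — the `parse_tool_calls` helper and the sample string are our own illustration, not actual model output:

```python
import json

def parse_tool_calls(reply: str):
    """Parse a JSON-list reply into (name, arguments) pairs.

    Returns an empty list when the model emits '[]' (no tool call needed).
    """
    calls = json.loads(reply.strip())
    return [(call["name"], call.get("arguments", {})) for call in calls]

# Illustrative reply string:
sample = '[{"name": "get_current_weather", "arguments": {"location": "New York, US"}}]'
print(parse_tool_calls(sample))
```

In practice you would also want to guard the `json.loads` call with error handling, since a decoding failure indicates the model did not follow the format instruction.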

## License Information

This model is subject to two different licenses:

1. **Base Model (Qwen)**: The base model is licensed under the [Qwen Research License](https://huggingface.co/MadeAgents/Hammer2.0-3b/blob/main/LICENSE). It is intended for non-commercial use only.
2. **Fine-tuning and Modifications**: The fine-tuning data and modifications are licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/), allowing for sharing and adaptation with proper attribution.