Jessylg27 committed on
Commit 5b30bec · verified · 1 Parent(s): 7651c3b

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,78 +1,56 @@
  ---
- base_model:
- - Qwen/Qwen2.5-Coder-32B-Instruct
  library_name: peft
- license: cc-by-nc-4.0
- datasets:
- - Jessylg27/DeepThink-Code-Lite
- language:
- - en
- - fr
  tags:
- - code
- - logic
- - reasoning
- - qwen2.5
- - unsloth
  - sft
  - trl
  ---

- # Specialized Coding Logic LLM (32B)
-
- This model is a specialized fine-tuned version of [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct).
- It has been optimized to enhance **logical reasoning** and **code generation capabilities**.
-
- ## 🧠 Model Description
-
- **Specialized Coding Logic LLM** builds upon the powerful Qwen 2.5 Coder architecture (32B parameters). It has been fine-tuned using the **DeepThink-Code-Lite** dataset to improve its ability to:
- - Solve complex algorithmic problems.
- - Follow multi-step logical instructions.
- - Generate cleaner and more optimized code.
-
- ## 📊 Dataset
-
- This model was trained on the custom dataset:
- 👉 **[Jessylg27/DeepThink-Code-Lite](https://huggingface.co/datasets/Jessylg27/DeepThink-Code-Lite)**
-
- ## 🚀 Quick Start
-
- You can use this model directly with the Hugging Face `pipeline`.

  ```python
  from transformers import pipeline

- # Define the model ID
- model_id = "Jessylg27/specialized-coding-logic-llm"
-
- # Initialize the pipeline
- generator = pipeline("text-generation", model=model_id, device_map="auto")
-
- # Prompt the model
- question = "Write a Python function to solve the Traveling Salesman Problem using dynamic programming."
- output = generator([{"role": "user", "content": question}], max_new_tokens=512, return_full_text=False)[0]
-
  print(output["generated_text"])
-
  ```

- ## 🛠️ Training procedure

- This model was trained with **SFT (Supervised Fine-Tuning)** using the [TRL library](https://github.com/huggingface/trl) and [Unsloth](https://github.com/unslothai/unsloth) for efficient training.

  ### Framework versions

- * **PEFT:** 0.18.1
- * **TRL:** 0.24.0
- * **Transformers:** 4.57.3
- * **Pytorch:** 2.8.0+cu128
- * **Datasets:** 4.3.0
- * **Tokenizers:** 0.22.2

- ## 📜 Citations

- If you use this model or the TRL library, please cite:

  ```bibtex
  @misc{vonwerra2022trl,
  title = {{TRL: Transformer Reinforcement Learning}},
@@ -80,7 +58,6 @@ If you use this model or the TRL library, please cite:
  year = 2020,
  journal = {GitHub repository},
  publisher = {GitHub},
- howpublished = {\url{[https://github.com/huggingface/trl](https://github.com/huggingface/trl)}}
  }
-
  ```
 
  ---
+ base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
  library_name: peft
+ model_name: output_model
  tags:
+ - base_model:adapter:unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
+ - lora
  - sft
+ - transformers
  - trl
+ - unsloth
+ licence: license
+ pipeline_tag: text-generation
  ---

+ # Model Card for output_model

+ This model is a fine-tuned version of [unsloth/qwen2.5-coder-32b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-coder-32b-instruct-bnb-4bit).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

  ```python
  from transformers import pipeline

+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="None", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
  print(output["generated_text"])
  ```

+ ## Training procedure
+
+ This model was trained with SFT.

  ### Framework versions

+ - PEFT 0.18.1
+ - TRL: 0.23.1
+ - Transformers: 4.57.1
+ - Pytorch: 2.9.0+cu128
+ - Datasets: 4.3.0
+ - Tokenizers: 0.22.2
+
+ ## Citations

+ Cite TRL as:
+
  ```bibtex
  @misc{vonwerra2022trl,
  title = {{TRL: Transformer Reinforcement Learning}},
  year = 2020,
  journal = {GitHub repository},
  publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
  }
  ```
adapter_config.json CHANGED
@@ -33,13 +33,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "q_proj",
- "o_proj",
  "gate_proj",
+ "o_proj",
  "down_proj",
+ "up_proj",
  "k_proj",
  "v_proj",
- "up_proj"
+ "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d78ad7a4ff610ee2a0bc0d954309434916fb2a93651554cb9444e98d056e64d3
+ oid sha256:c3a4f541e1d5d34a5ddd5a9f2fd2668146a75c7f1e099014482e572b8c47f075
  size 536991984
chat_template.jinja CHANGED
@@ -1,139 +1,53 @@
- {{- bos_token }}
- {%- if custom_tools is defined %}
- {%- set tools = custom_tools %}
- {%- endif %}
- {%- if not tools_in_user_message is defined %}
- {%- set tools_in_user_message = true %}
- {%- endif %}
- {%- if not date_string is defined %}
- {%- set date_string = "26 July 2024" %}
- {%- endif %}
- {%- if not tools is defined %}
- {%- set tools = none %}
- {%- endif %}
-
- {#- This block extracts the system message, so we can slot it into the right place. #}
- {%- if messages[0]['role'] == 'system' %}
- {%- set system_message = messages[0]['content'] %}
- {%- set messages = messages[1:] %}
- {%- else %}
- {%- set system_message = "" %}
- {%- endif %}
-
- {#- System message + builtin tools #}
- {{- "<|start_header_id|>system<|end_header_id|>
-
- " }}
- {%- if builtin_tools is defined or tools is not none %}
- {{- "Environment: ipython
- " }}
- {%- endif %}
- {%- if builtin_tools is defined %}
- {{- "Tools: " + builtin_tools | reject('equalto', 'code_interpreter') | join(", ") + "
-
- "}}
- {%- endif %}
- {{- "Cutting Knowledge Date: December 2023
- " }}
- {{- "Today Date: " + date_string + "
-
- " }}
- {%- if tools is not none and not tools_in_user_message %}
- {{- "You have access to the following functions. To call a function, please respond with JSON for a function call." }}
- {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
- {{- "Do not use variables.
-
- " }}
- {%- for t in tools %}
- {{- t | tojson(indent=4) }}
- {{- "
-
- " }}
- {%- endfor %}
- {%- endif %}
- {{- system_message }}
- {{- "<|eot_id|>" }}
-
- {#- Custom tools are passed in a user message with some extra guidance #}
- {%- if tools_in_user_message and not tools is none %}
- {#- Extract the first user message so we can plug it in here #}
- {%- if messages | length != 0 %}
- {%- set first_user_message = messages[0]['content'] %}
- {%- set messages = messages[1:] %}
  {%- else %}
- {{- raise_exception("Cannot put tools in the first user message when there's no first user message!") }}
- {%- endif %}
- {{- '<|start_header_id|>user<|end_header_id|>
-
- ' -}}
- {{- "Given the following functions, please respond with a JSON for a function call " }}
- {{- "with its proper arguments that best answers the given prompt.
-
- " }}
- {{- 'Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.' }}
- {{- "Do not use variables.
-
- " }}
- {%- for t in tools %}
- {{- t | tojson(indent=4) }}
- {{- "
-
- " }}
  {%- endfor %}
- {{- first_user_message + "<|eot_id|>"}}
  {%- endif %}
-
  {%- for message in messages %}
- {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}
- {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>
-
- '+ message['content'] + '<|eot_id|>' }}
- {%- elif 'tool_calls' in message %}
- {%- if not message.tool_calls|length == 1 %}
- {{- raise_exception("This model only supports single tool-calls at once!") }}
  {%- endif %}
- {%- set tool_call = message.tool_calls[0].function %}
- {%- if builtin_tools is defined and tool_call.name in builtin_tools %}
- {{- '<|start_header_id|>assistant<|end_header_id|>
-
- ' -}}
- {{- "<|python_tag|>" + tool_call.name + ".call(" }}
- {%- for arg_name, arg_val in tool_call.arguments | items %}
- {{- arg_name + '="' + arg_val + '"' }}
- {%- if not loop.last %}
- {{- ", " }}
- {%- endif %}
- {%- endfor %}
- {{- ")" }}
- {%- else %}
- {{- '<|start_header_id|>assistant<|end_header_id|>
-
- ' -}}
- {{- '{"name": "' + tool_call.name + '", ' }}
- {{- '"parameters": ' }}
  {{- tool_call.arguments | tojson }}
- {{- "}" }}
- {%- endif %}
- {%- if builtin_tools is defined %}
- {#- This means we're in ipython mode #}
- {{- "<|eom_id|>" }}
- {%- else %}
- {{- "<|eot_id|>" }}
  {%- endif %}
- {%- elif message.role == "tool" or message.role == "ipython" %}
- {{- "<|start_header_id|>ipython<|end_header_id|>
-
- " }}
- {%- if message.content is mapping or message.content is iterable %}
- {{- message.content | tojson }}
- {%- else %}
- {{- message.content }}
  {%- endif %}
- {{- "<|eot_id|>" }}
  {%- endif %}
  {%- endfor %}
  {%- if add_generation_prompt %}
- {{- '<|start_header_id|>assistant<|end_header_id|>
-
- ' }}
  {%- endif %}
 
  {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- messages[0]['content'] }}
  {%- else %}
+ {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
+ {%- endif %}
+ {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
  {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+ {%- else %}
+ {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
+ {%- endif %}
  {%- endif %}
  {%- for message in messages %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {{- '<|im_start|>' + message.role }}
+ {%- if message.content %}
+ {{- '\n' + message.content }}
  {%- endif %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '\n<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
  {{- tool_call.arguments | tojson }}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }}
  {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- message.content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
  {%- endif %}
  {%- endif %}
  {%- endfor %}
  {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
  {%- endif %}
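The commit replaces a Llama-style header template with Qwen's ChatML-style template. The non-tool path of the new template above can be mirrored in plain Python; `format_chatml` is a hypothetical helper for illustration only (tool-call branches omitted), and with `transformers` installed, `tokenizer.apply_chat_template` renders the real template directly:

```python
# Minimal sketch of the new ChatML-style rendering above (no tools, no tool
# calls). format_chatml is an illustrative helper, not part of this repo.
DEFAULT_SYSTEM = "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."

def format_chatml(messages, add_generation_prompt=False):
    parts = []
    # The template injects a default system prompt when none is supplied.
    if not messages or messages[0]["role"] != "system":
        parts.append(f"<|im_start|>system\n{DEFAULT_SYSTEM}<|im_end|>\n")
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # add_generation_prompt opens an assistant turn for the model to complete.
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

print(format_chatml([{"role": "user", "content": "Hi"}], add_generation_prompt=True))
```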
config.json CHANGED
@@ -87,7 +87,7 @@
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
- "transformers_version": "4.57.3",
+ "transformers_version": "4.57.1",
  "unsloth_fixed": true,
  "unsloth_version": "2026.1.3",
  "use_cache": true,
model-00001-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:adb7961d556f392969c770d63c425062503f3448f2a5b800ae577cbe3907ea7b
+ oid sha256:bde5ead684e85c851ac07cd19a6666e68a6e5a10ce0778e974d54972d759bd2e
  size 4891730992
model-00002-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:304876107b828a36354a318b6660e401e0b60c5e3c0959b558157628a9cab7e9
+ oid sha256:818134311a0ba132e44c8c591615c6c39559049c85c1bbc8ff53a149b23bdb4b
  size 4876059352
model-00003-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:141fabd77155b9747fb5961ecfc93e0ec14591b41703a2ea69060f26c01fa8ac
+ oid sha256:75786bf85845c87af719d6b7b653b7c3ed627f4782d0c25ec086365a758217b2
  size 4876059384
model-00004-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3745a64bbbeb5011823a768a2ea3ba0feecfb2f21c0a48866978ad515020a34f
+ oid sha256:88ecfcfb0b660595c6f2cfc08c9acf7532e084ef1cff33befdc469f2c81c1dba
  size 4876059416
model-00005-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:add239dcff71f899fbc5f0a22aa1745d5954044385ce46cf0ef0bcbf178e68e7
+ oid sha256:c30687123d9060d6159b2271df3ffd833be6a738dbf01f32609ebeb6e524a658
  size 4876059416
model-00006-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:92e4c77b2746805647307b26e266a8dfaa4669c486daef5a967ee9a288c26a5b
+ oid sha256:a5e266bef1de4aa2aaf6f8a053734d310c14ab909171580d7a66a6289d00f3ab
  size 4876059416
model-00007-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:042d6d35c86794a50be5506fe86ee54a67899f57465fc4a08341cd960420840a
+ oid sha256:b9fdc616b259d94c4a104f1f11fa34d03152318794f05f0987abc3b33a8b9a90
  size 4876059416
model-00008-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e28a49305cb327dc8bd53fe5678b26b07d3b824343dcfea119562b20b32d55ff
+ oid sha256:41a3a914a0ae955cffae2a43539d7d44d9efebfe1a42eb03aeda5ef08e9d0183
  size 4876059416
model-00009-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:59875cba39959c38385d0018d4e0e093da20783d6ab1e72fffdb439d307e093c
+ oid sha256:710ff26625cc64a030058c5a242da3768c9d0e576742811d18db4ba1c7d8c4e8
  size 4876059416
model-00010-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f717edeafc543332c8df42b4ecbe6b15ffc6451a52c4ac7eeda6f8f3d023eb4f
+ oid sha256:fbc0fac23b287ed7119189de5757ef3d121422805b3ec7cdcd7cc0bb2eeafb94
  size 4876059416
model-00011-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:190a001b299c0351895cb3892aca196d2fbf06541e1f7cce71fedeb5e403919d
+ oid sha256:da3134b380b412c4160512358e6f4dd7e9f88a045e6ccdd4fecbc8962b804285
  size 4876059416
model-00012-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f019e7ac5e81583113fbff3e77223be0e3dd450e369ad207b7c478ed93eccb6
+ oid sha256:107b71f12cacbdcf75057c3658dd839d2b5052b7e38c81748ad5ebe2a52d7719
  size 4876059416
model-00013-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d1957acb85db5457fcd18c0b5d90605d04e68e2704980884fbcfa2d5ff8f64ee
+ oid sha256:e39164291d88362be166e79f0cca048c2de337f436de7d73cfb732a25acce6c1
  size 4876059416
model-00014-of-00014.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e01df2a39c9516306b352b7ad68b5a9dd1eed1a5bec5db67fa3a2981d1e630df
+ oid sha256:01e90d0a3541794a6fb20eadc65c938fe4624807e5d066000503ff01552d181d
  size 2123397800
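Each `.safetensors` entry above is a git-lfs pointer file, not the weights themselves: only the sha256 digests change between revisions, while every `size` field is unchanged, consistent with re-saved tensors of identical shape. A small sketch of parsing the three-line pointer format (`parse_lfs_pointer` is an illustrative helper, not a git-lfs API):

```python
# Parse a git-lfs pointer file (version / oid / size key-value lines),
# as used by the .safetensors entries above.
def parse_lfs_pointer(text):
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "oid_algo": algo,
            "oid": digest, "size": int(fields["size"])}

# Pointer content copied from model-00014-of-00014.safetensors above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:01e90d0a3541794a6fb20eadc65c938fe4624807e5d066000503ff01552d181d
size 2123397800"""

info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])
```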
tokenizer_config.json CHANGED
@@ -213,5 +213,5 @@
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null,
- "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n    {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n    {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n    {%- set date_string = \"26 July 2024\" %}\n{%- endif %}\n{%- if not tools is defined %}\n    {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n    {%- set system_message = messages[0]['content'] %}\n    {%- set messages = messages[1:] %}\n{%- else %}\n    {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message + builtin tools #}\n{{- \"<|start_header_id|>system<|end_header_id|>\n\n\" }}\n{%- if builtin_tools is defined or tools is not none %}\n    {{- \"Environment: ipython\n\" }}\n{%- endif %}\n{%- if builtin_tools is defined %}\n    {{- \"Tools: \" + builtin_tools | reject('equalto', 'code_interpreter') | join(\", \") + \"\n\n\"}}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\n\" }}\n{{- \"Today Date: \" + date_string + \"\n\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n    {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\n\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\n\n\" }}\n    {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n    {#- Extract the first user message so we can plug it in here #}\n    {%- if messages | length != 0 %}\n        {%- set first_user_message = messages[0]['content'] %}\n        {%- set messages = messages[1:] %}\n    {%- else %}\n        {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n    {{- '<|start_header_id|>user<|end_header_id|>\n\n' -}}\n    {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n    {{- \"with its proper arguments that best answers the given prompt.\n\n\" }}\n    {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n    {{- \"Do not use variables.\n\n\" }}\n    {%- for t in tools %}\n        {{- t | tojson(indent=4) }}\n        {{- \"\n\n\" }}\n    {%- endfor %}\n    {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n    {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] + '<|eot_id|>' }}\n    {%- elif 'tool_calls' in message %}\n        {%- if not message.tool_calls|length == 1 %}\n            {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n        {%- endif %}\n        {%- set tool_call = message.tool_calls[0].function %}\n        {%- if builtin_tools is defined and tool_call.name in builtin_tools %}\n            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}\n            {{- \"<|python_tag|>\" + tool_call.name + \".call(\" }}\n            {%- for arg_name, arg_val in tool_call.arguments | items %}\n                {{- arg_name + '=\"' + arg_val + '\"' }}\n                {%- if not loop.last %}\n                    {{- \", \" }}\n                {%- endif %}\n            {%- endfor %}\n            {{- \")\" }}\n        {%- else %}\n            {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}\n            {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n            {{- '\"parameters\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- \"}\" }}\n        {%- endif %}\n        {%- if builtin_tools is defined %}\n            {#- This means we're in ipython mode #}\n            {{- \"<|eom_id|>\" }}\n        {%- else %}\n            {{- \"<|eot_id|>\" }}\n        {%- endif %}\n    {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n        {{- \"<|start_header_id|>ipython<|end_header_id|>\n\n\" }}\n        {%- if message.content is mapping or message.content is iterable %}\n            {{- message.content | tojson }}\n        {%- else %}\n            {{- message.content }}\n        {%- endif %}\n        {{- \"<|eot_id|>\" }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}\n{%- endif %}\n"
+ "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %} {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n"
  }