damienbenveniste committed
Commit 39ae472 · verified · 1 Parent(s): 8fff520

End of training

README.md CHANGED
@@ -1,43 +1,67 @@
  ---
- license: apache-2.0
+ base_model: damienbenveniste/mistral-supervised
+ library_name: transformers
+ model_name: mistral-ppo
  tags:
- - trl
+ - generated_from_trainer
  - ppo
- - transformers
- - reinforcement-learning
+ - trl
+ licence: license
  ---

- # TRL Model
+ # Model Card for mistral-ppo

- This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
- guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
+ This model is a fine-tuned version of [damienbenveniste/mistral-supervised](https://huggingface.co/damienbenveniste/mistral-supervised).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Usage
+ ## Quick start

- To use this model for inference, first install the TRL library:
+ ```python
+ from transformers import pipeline

- ```bash
- python -m pip install trl
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="damienbenveniste/mistral-ppo", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
  ```

- You can then generate text as follows:
+ ## Training procedure

- ```python
- from transformers import pipeline
-
- generator = pipeline("text-generation", model="damienbenveniste//private/var/folders/dy/k5ycdcns28s2cxl8hc76v2mr0000gn/T/tmpl2a_v6sk/damienbenveniste/mistral-ppo")
- outputs = generator("Hello, my llama is cute")
- ```
+ This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593).

- If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
+ ### Framework versions
+
+ - TRL: 0.19.1
+ - Transformers: 4.52.4
+ - Pytorch: 2.7.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1

- ```python
- from transformers import AutoTokenizer
- from trl import AutoModelForCausalLMWithValueHead
+ ## Citations

- tokenizer = AutoTokenizer.from_pretrained("damienbenveniste//private/var/folders/dy/k5ycdcns28s2cxl8hc76v2mr0000gn/T/tmpl2a_v6sk/damienbenveniste/mistral-ppo")
- model = AutoModelForCausalLMWithValueHead.from_pretrained("damienbenveniste//private/var/folders/dy/k5ycdcns28s2cxl8hc76v2mr0000gn/T/tmpl2a_v6sk/damienbenveniste/mistral-ppo")
-
- inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
- outputs = model(**inputs, labels=inputs["input_ids"])
+ Cite PPO as:
+
+ ```bibtex
+ @article{mziegler2019fine-tuning,
+     title        = {{Fine-Tuning Language Models from Human Preferences}},
+     author       = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
+     year         = 2019,
+     eprint       = {arXiv:1909.08593}
+ }
  ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
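The updated card drops the value-head loading pattern the old README carried. For anyone continuing PPO training from this checkpoint, a minimal sketch of that pattern, assuming `trl`, `transformers`, and `torch` are installed (the prompt string is purely illustrative):

```python
# Sketch: reload the checkpoint with its value head, as the old README showed.
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

model_id = "damienbenveniste/mistral-ppo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_id)

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# TRL's value-head model returns a (lm_logits, loss, value) tuple; the value
# estimates are the per-token baseline that PPO optimizes against.
lm_logits, loss, value = model(**inputs, labels=inputs["input_ids"])
```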
chat_template.jinja ADDED
@@ -0,0 +1,89 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0].role == 'system' %}
+         {{- messages[0].content + '\n\n' }}
+     {%- endif %}
+     {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0].role == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+     {%- set index = (messages|length - 1) - loop.index0 %}
+     {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
+         {%- set ns.multi_step_tool = false %}
+         {%- set ns.last_query_index = index %}
+     {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+     {%- if message.content is string %}
+         {%- set content = message.content %}
+     {%- else %}
+         {%- set content = '' %}
+     {%- endif %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set reasoning_content = '' %}
+         {%- if message.reasoning_content is string %}
+             {%- set reasoning_content = message.reasoning_content %}
+         {%- else %}
+             {%- if '</think>' in content %}
+                 {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+                 {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+             {%- endif %}
+         {%- endif %}
+         {%- if loop.index0 > ns.last_query_index %}
+             {%- if loop.last or (not loop.last and reasoning_content) %}
+                 {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+             {%- else %}
+                 {{- '<|im_start|>' + message.role + '\n' + content }}
+             {%- endif %}
+         {%- else %}
+             {{- '<|im_start|>' + message.role + '\n' + content }}
+         {%- endif %}
+         {%- if message.tool_calls %}
+             {%- for tool_call in message.tool_calls %}
+                 {%- if (loop.first and content) or (not loop.first) %}
+                     {{- '\n' }}
+                 {%- endif %}
+                 {%- if tool_call.function %}
+                     {%- set tool_call = tool_call.function %}
+                 {%- endif %}
+                 {{- '<tool_call>\n{"name": "' }}
+                 {{- tool_call.name }}
+                 {{- '", "arguments": ' }}
+                 {%- if tool_call.arguments is string %}
+                     {{- tool_call.arguments }}
+                 {%- else %}
+                     {{- tool_call.arguments | tojson }}
+                 {%- endif %}
+                 {{- '}\n</tool_call>' }}
+             {%- endfor %}
+         {%- endif %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+     {%- if enable_thinking is defined and enable_thinking is false %}
+         {{- '<think>\n\n</think>\n\n' }}
+     {%- endif %}
+ {%- endif %}
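The added template is a ChatML-style format with optional tool calling and `<think>` reasoning blocks; the trailing `enable_thinking` branch controls whether an empty think block is pre-filled in the generation prompt. A quick way to inspect what it renders, assuming the tokenizer from this repo (the message is illustrative):

```python
# Sketch: render the template without generating, to see the ChatML markup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("damienbenveniste/mistral-ppo")
messages = [{"role": "user", "content": "Hello!"}]

# Extra kwargs such as enable_thinking are forwarded into the Jinja context;
# with enable_thinking=False the template emits an empty <think></think> block.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
print(text)
# Expected shape:
# <|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n
```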
config.json CHANGED
@@ -1,12 +1,11 @@
  {
-   "_name_or_path": "damienbenveniste/mistral-supervised",
    "architectures": [
      "MistralForCausalLM"
    ],
    "attention_dropout": 0.0,
    "bos_token_id": 1,
-   "eos_token_id": 2,
-   "head_dim": 48,
+   "eos_token_id": 32002,
+   "head_dim": null,
    "hidden_act": "silu",
    "hidden_size": 768,
    "initializer_range": 0.02,
@@ -16,12 +15,13 @@
    "num_attention_heads": 16,
    "num_hidden_layers": 4,
    "num_key_value_heads": 8,
+   "pad_token_id": 2,
    "rms_norm_eps": 1e-06,
    "rope_theta": 10000.0,
    "sliding_window": 768,
    "tie_word_embeddings": false,
    "torch_dtype": "float32",
-   "transformers_version": "4.44.2",
+   "transformers_version": "4.52.4",
    "use_cache": true,
-   "vocab_size": 32000
+   "vocab_size": 32064
  }
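Functionally, the key changes here are `eos_token_id` moving from 2 (`</s>`) to 32002 (`<|im_end|>`), the old `</s>` id being repurposed as `pad_token_id`, and `vocab_size` growing from 32000 to 32064 to hold the new special tokens (padded past the 26 actually added, presumably to a multiple of 64). A quick consistency check, assuming the published checkpoint:

```python
# Sketch: verify the config changes described in this hunk.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("damienbenveniste/mistral-ppo")
assert config.eos_token_id == 32002  # <|im_end|>, see tokenizer_config.json below
assert config.pad_token_id == 2      # the old </s> id, now used for padding
assert config.vocab_size == 32064    # 32000 + 64 new embedding rows
```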
generation_config.json CHANGED
@@ -1,6 +1,5 @@
  {
    "_from_model_config": true,
    "bos_token_id": 1,
-   "eos_token_id": 2,
-   "transformers_version": "4.44.2"
+   "transformers_version": "4.52.4"
  }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85ee50a22dabf1652e2365843c33fbb25cab5fa12acf25bc0c36147211f49b65
- size 338200972
+ oid sha256:d7942f5514948f01ae97d037aeea8f918367c1c9fb12e4f747d69cb0af3ccddd
+ size 338590928
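The weight file grows by 338,590,928 − 338,200,972 = 389,956 bytes, roughly what the vocabulary resize predicts: with `tie_word_embeddings: false`, both `embed_tokens` and `lm_head` gain 64 rows of 768 float32 values. Attributing the small remaining gap to safetensors header metadata is an assumption:

```python
# Back-of-envelope check on the size delta (header overhead ignored).
added_rows, hidden_size, fp32_bytes, untied_matrices = 64, 768, 4, 2
print(added_rows * hidden_size * fp32_bytes * untied_matrices)  # 393216 expected
print(338590928 - 338200972)                                    # 389956 observed
```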
special_tokens_map.json CHANGED
@@ -6,13 +6,7 @@
      "rstrip": false,
      "single_word": false
    },
-   "eos_token": {
-     "content": "</s>",
-     "lstrip": false,
-     "normalized": false,
-     "rstrip": false,
-     "single_word": false
-   },
+   "eos_token": "<|im_end|>",
    "pad_token": {
      "content": "</s>",
      "lstrip": false,
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -26,12 +26,221 @@
      "rstrip": false,
      "single_word": false,
      "special": true
+   },
+   "32000": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32001": {
+     "content": "<|im_start|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32002": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32003": {
+     "content": "<|object_ref_start|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32004": {
+     "content": "<|object_ref_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32005": {
+     "content": "<|box_start|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32006": {
+     "content": "<|box_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32007": {
+     "content": "<|quad_start|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32008": {
+     "content": "<|quad_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32009": {
+     "content": "<|vision_start|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32010": {
+     "content": "<|vision_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32011": {
+     "content": "<|vision_pad|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32012": {
+     "content": "<|image_pad|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32013": {
+     "content": "<|video_pad|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": true
+   },
+   "32014": {
+     "content": "<tool_call>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32015": {
+     "content": "</tool_call>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32016": {
+     "content": "<|fim_prefix|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32017": {
+     "content": "<|fim_middle|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32018": {
+     "content": "<|fim_suffix|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32019": {
+     "content": "<|fim_pad|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32020": {
+     "content": "<|repo_name|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32021": {
+     "content": "<|file_sep|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32022": {
+     "content": "<tool_response>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32023": {
+     "content": "</tool_response>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32024": {
+     "content": "<think>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "32025": {
+     "content": "</think>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
- "eos_token": "</s>",
+ "eos_token": "<|im_end|>",
+ "extra_special_tokens": {},
  "legacy": false,
  "max_length": 512,
  "model_max_length": 1000000000000000019884624838656,
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:383131c08d83084076f852ac7a30176581b7c578685bc7e6ae8ad17b29c03fdb
+ size 6545