Stop generating token type IDs

#1
opened by lysandre (HF Staff)

Files changed (2):
  1. README.md (+7 -2)
  2. tokenizer_config.json (+2 -1)
README.md CHANGED
@@ -25,6 +25,10 @@ library_name: transformers
 11. [Citation](#citation)
 
 
+---
+
+**Click here to skip to the technical report** -> https://huggingface.co/ServiceNow-AI/Apriel-1.5-15b-Thinker/blob/main/Apriel-1.5-Thinker.pdf
+
 ---
 
 # Summary
@@ -229,9 +233,10 @@ model_inputs = tokenizer([text], return_tensors="pt")
 ```
 
 ## Usage Guidelines
-1. Use the model’s default chat template, which already includes a system prompt. We recommend adding all other instructions within the user message.
+1. Use the model’s default chat template, which already includes a system prompt.
 2. We recommend setting temperature to `0.6`.
-3. We ensure the model starts with `Here are my reasoning steps:\n` during all our evaluations. This is implemented in the default chat template.
+3. We ensure the model starts with `Here are my reasoning steps:\n` during all our evaluations. This is implemented in the default chat template.
+4. For multi-turn conversations, intermediate turns (historical model outputs) are expected to contain only the final response, without reasoning steps.
 
 ---
 
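The new guideline 4 implies that, before re-sending a conversation, the caller should strip the reasoning steps from historical assistant turns and keep only the final response. A minimal sketch of that pre-processing, assuming the final response is bracketed by delimiter strings in the model output — the marker names below are hypothetical; check the model's actual chat template for the real ones:

```python
import re

# Hypothetical delimiters -- inspect the checkpoint's chat template for the
# actual markers used around the final response.
BEGIN, END = "[BEGIN FINAL RESPONSE]", "[END FINAL RESPONSE]"

def strip_reasoning(assistant_text: str) -> str:
    """Return only the final response from a full model output (guideline 4).

    If no delimiters are found, the text is assumed to already be a bare
    final response and is returned unchanged (aside from whitespace).
    """
    m = re.search(re.escape(BEGIN) + r"(.*?)" + re.escape(END),
                  assistant_text, re.DOTALL)
    return m.group(1).strip() if m else assistant_text.strip()

# Historical assistant turns would be rewritten with strip_reasoning(...)
# before being appended to the `messages` list for the next request.
```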
tokenizer_config.json CHANGED
@@ -8011,5 +8011,6 @@
   "padding_side": "left",
   "processor_class": "PixtralProcessor",
   "tokenizer_class": "PreTrainedTokenizerFast",
-  "unk_token": "<unk>"
+  "unk_token": "<unk>",
+  "model_input_names": ["input_ids", "attention_mask"]
 }
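Setting `model_input_names` to `["input_ids", "attention_mask"]` is what gives the PR its title: a tokenizer only returns the tensors listed there by default, so `token_type_ids` stops being generated. A toy sketch of that filtering behavior — this is a simplified stand-in for the `transformers` internals, not the library's actual code, and the character-level "tokenizer" is purely illustrative:

```python
# After this PR, the config pins the inputs the model actually consumes.
MODEL_INPUT_NAMES = ["input_ids", "attention_mask"]

def encode(text: str, model_input_names=MODEL_INPUT_NAMES) -> dict:
    """Toy encoder mimicking how model_input_names gates tokenizer outputs."""
    ids = [ord(c) for c in text]  # stand-in for real tokenization
    outputs = {
        "input_ids": ids,
        "attention_mask": [1] * len(ids),
        "token_type_ids": [0] * len(ids),  # candidate output
    }
    # Only keys named in model_input_names are returned to the caller.
    return {k: v for k, v in outputs.items() if k in model_input_names}
```

With the updated config, `encode("hi")` yields only `input_ids` and `attention_mask`; listing `token_type_ids` in `model_input_names` would restore the old behavior.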