KristianS7 committed (verified)
Commit 4a0ea2c · Parent: 1ed0425

Fix bos/eos token IDs (config.json + tokenizer_config.json)


## Problem

Both `bos_token` and `eos_token` are set to `<|endoftext|>` (id=0), but Ouro uses ChatML format where:

- `bos_token` should be `<|im_start|>` (id=1)
- `eos_token` should be `<|im_end|>` (id=2)

This causes issues with:
- Generation stopping: the stop condition never fires, because the model ends its turns with `<|im_end|>` rather than `<|endoftext|>`
- Tokenizer `add_special_tokens`: the wrong BOS token is prepended
- Downstream tools (vLLM, lm-eval-harness) that rely on `eos_token_id` for stop conditions
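To illustrate the first point: a generation loop stops when the sampled id equals `eos_token_id`, so a ChatML model that ends turns with `<|im_end|>` (id 2) never triggers a stop condition watching for id 0. A toy sketch (not the transformers implementation; the token stream is invented):

```python
def generate_ids(stream, eos_token_id, max_new_tokens=8):
    """Toy stop-condition loop: consume token ids until EOS or budget."""
    out = []
    for tok in stream[:max_new_tokens]:
        out.append(tok)
        if tok == eos_token_id:
            break
    return out

# A ChatML-trained model ends its turn with <|im_end|> (id 2):
stream = [17, 42, 99, 2, 7, 7, 7, 7]
print(generate_ids(stream, eos_token_id=0))  # never stops, runs to the budget
print(generate_ids(stream, eos_token_id=2))  # stops at id 2: [17, 42, 99, 2]
```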

## Fix

**config.json:**
- `bos_token_id`: 0 → 1
- `eos_token_id`: 0 → 2

**tokenizer_config.json:**
- `bos_token`: `<|endoftext|>` → `<|im_start|>`
- `eos_token`: `<|endoftext|>` → `<|im_end|>`

See also: same fix merged for the Thinking variant — https://huggingface.co/ByteDance/Ouro-1.4B-Thinking/discussions/4
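A quick local sanity check for the new `config.json` values might look like this (illustrative helper, not part of any library; the expected ids come from the mapping above):

```python
# Expected ChatML special-token ids for Ouro, per the fix above.
CHATML_IDS = {"bos_token_id": 1,  # <|im_start|>
              "eos_token_id": 2}  # <|im_end|>

def chatml_id_mismatches(config: dict) -> dict:
    """Map each wrong key to (found, expected); empty dict means consistent."""
    return {key: (config.get(key), want)
            for key, want in CHATML_IDS.items()
            if config.get(key) != want}

before = {"bos_token_id": 0, "eos_token_id": 0}  # both <|endoftext|>
after = {"bos_token_id": 1, "eos_token_id": 2}

print(chatml_id_mismatches(before))  # both keys flagged
print(chatml_id_mismatches(after))   # {} -> consistent
```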

Files changed (2):
  1. config.json +2 -2
  2. tokenizer_config.json +2 -2
config.json CHANGED
@@ -8,8 +8,8 @@
     "AutoModel": "modeling_ouro.OuroModel",
     "AutoModelForCausalLM": "modeling_ouro.OuroForCausalLM"
   },
-  "bos_token_id": 0,
-  "eos_token_id": 0,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
   "head_dim": 128,
   "hidden_act": "silu",
   "hidden_size": 2048,
tokenizer_config.json CHANGED
@@ -157,10 +157,10 @@
     "<jupyter_script>",
     "<empty_output>"
   ],
-  "bos_token": "<|endoftext|>",
+  "bos_token": "<|im_start|>",
   "clean_up_tokenization_spaces": false,
   "chat_template": "{%- if messages[0]['role'] == 'system' -%}{{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}{%- else -%}{{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}{%- endif -%}{%- for message in messages -%}{%- if message.role == 'system' and loop.first -%}{# Skip #}{%- else -%}{{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n' }}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{- '<|im_start|>assistant\\n' }}{%- endif -%}",
-  "eos_token": "<|endoftext|>",
+  "eos_token": "<|im_end|>",
   "extra_special_tokens": {},
   "model_max_length": 131072,
   "tokenizer_class": "GPT2Tokenizer",