AlexL0701 committed on
Commit 5913d56 · verified · 1 Parent(s): 98e5065

Update README.md

Files changed (1): README.md +10 -0
README.md CHANGED
@@ -5,3 +5,13 @@ base_model:
  - meta-llama/Meta-Llama-3.1-8B
  ---
 
+
+
+
+ This repository fixes the original (broken) model so that it works correctly with OpenAI-style chat APIs such as /chat/completions.
+
+ The problem was that the original tokenizer configuration did not define a chat template. When a model is served with Text Generation Inference, chat requests are sent as structured messages with roles like user and assistant. Without a chat template, the server does not know how to convert those messages into the text format expected by the model, which causes chat requests to fail.
+
+ To fix this, tokenizer_config.json was updated: the bos_token was set to <|endoftext|> instead of null, and a chat_template was added. The chat template wraps each message's role and content in <|im_start|> and <|im_end|> tokens, which matches the expected Qwen-style prompt format.
+
+ With these changes, the server can correctly serialize chat messages into a prompt and the model can generate responses normally when using /chat/completions.
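
A rough sketch of the updated fields in tokenizer_config.json, assuming the standard Qwen/ChatML template string (the diff above does not reproduce the file itself, so treat this as an illustration rather than the exact contents):

```json
{
  "bos_token": "<|endoftext|>",
  "chat_template": "{% for message in messages %}{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
}
```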
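One way to check the fix locally (a sketch, not part of this commit) is to load the tokenizer and render a chat request with transformers' apply_chat_template, which uses the same chat_template that Text Generation Inference applies when serving /chat/completions. The repository id below is a placeholder:

```python
# Sketch: render an OpenAI-style chat request into the prompt the model expects.
# "user/repo" is a placeholder for this repository's id on the Hub.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("user/repo")

messages = [
    {"role": "user", "content": "Hello, who are you?"},
]

# Uses the chat_template from tokenizer_config.json; before this fix the
# template was missing, so the server had no way to build a prompt from
# role/content messages and chat requests failed.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# Expected shape, given the Qwen/ChatML-style template:
# <|im_start|>user
# Hello, who are you?<|im_end|>
# <|im_start|>assistant
```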