Upload README.md with huggingface_hub
## 🔧 Prompt Format (Chat Template)
During inference, each question is formatted as:

{question} Please reason step by step, and put your final answer within \boxed{}.
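As a concrete sketch of this formatting step (the helper name below is hypothetical, not part of the model's API):

```python
def format_question(question: str) -> str:
    # Append the model card's reasoning instruction to the raw question.
    return (
        f"{question} Please reason step by step, "
        "and put your final answer within \\boxed{}."
    )

print(format_question("What is 2 + 2?"))
```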
Then wrapped using the chat template:

```python
prompt = tokenizer.apply_chat_template(
    ...,
    tokenize=False,
    add_generation_prompt=True,
)
```

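`apply_chat_template` expects a chat-style message list as its first argument; a minimal sketch of constructing one, assuming a single-turn user message (`build_messages` is a hypothetical helper):

```python
def build_messages(prompt_text: str) -> list:
    # Single-turn conversation: the formatted question as one user message.
    return [{"role": "user", "content": prompt_text}]

messages = build_messages(
    "What is 2 + 2? Please reason step by step, "
    "and put your final answer within \\boxed{}."
)
```

This `messages` list is what would be passed to `tokenizer.apply_chat_template(...)`.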
## 🧪 Example Usage

```python
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

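Since the final answer lands inside `\boxed{}`, a small regex-based post-processing sketch (an assumption for illustration, not part of the model card) can pull it out of the decoded text:

```python
import re

def extract_boxed_answer(text: str):
    # Return the contents of the last \boxed{...} in the generated text, if any.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(r"So the final answer is \boxed{42}."))  # prints 42
```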
## 📎 Reference