Update README.md

README.md CHANGED:

````diff
@@ -18,8 +18,8 @@ This is a 0.6B parameter LLM designed for synthetic reasoning generation between
 
 For example, this model allows you to turn any chat dataset into a reasoning dataset as if it was generated by DeepSeek R1 or Openai's GPT OSS!
 
-# 👀 EXAMPLE DATASET
-https://huggingface.co/Pinkstack/syngen-reasoning-example-80-smoltalk1
+# 👀 EXAMPLE DATASET GENERATED WITH IT
+https://huggingface.co/datasets/Pinkstack/syngen-reasoning-example-80-smoltalk1
 
 # 🤔 HOW TO USE
 
@@ -107,4 +107,7 @@ Reasoning effort: low<|im_end|>
 <generated_thinking_gpt>
 Must produce a simple greeting.
 </generated_thinking_gpt>
-```
+```
+
+
+Liked the model? Need help with it? Do you know how to improve it further? Please make a post in the community tab.
````
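The README excerpt above hints at the model's prompt convention: a conversation ending in a `Reasoning effort: low<|im_end|>` line, after which the model emits synthetic reasoning between `<generated_thinking_gpt>` tags. As a minimal sketch of how one might wrap a chat-dataset turn into that format and pull the generated thinking back out — the function names and the exact template are assumptions, not the model card's official API:

```python
def build_syngen_prompt(user_message: str, assistant_answer: str,
                        reasoning_effort: str = "low") -> str:
    """Assemble a prompt (hypothetical template, inferred from the diff
    above) asking the model to synthesize the hidden reasoning that could
    have produced `assistant_answer`."""
    return (
        f"{user_message}\n"
        f"{assistant_answer}\n"
        f"Reasoning effort: {reasoning_effort}<|im_end|>\n"
        "<generated_thinking_gpt>\n"
    )

def extract_thinking(completion: str) -> str:
    """Pull the text between the thinking tags out of a raw completion."""
    open_tag = "<generated_thinking_gpt>"
    close_tag = "</generated_thinking_gpt>"
    start = completion.find(open_tag) + len(open_tag)
    end = completion.find(close_tag)
    return completion[start:end].strip()

# Example round trip with a made-up chat turn and a made-up completion:
prompt = build_syngen_prompt("Hi!", "Hello! How can I help you today?")
completion = prompt + "Must produce a simple greeting.\n</generated_thinking_gpt>"
print(extract_thinking(completion))  # prints: Must produce a simple greeting.
```

Mapping `extract_thinking` over each generated completion would yield the synthetic reasoning column for the new dataset, alongside the original user and assistant turns.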