---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
tags:
- cot
- code
- gpt_oss
- conversational
- distillation
- math
---

# What it is

DistilGPT-OSS-qwen3-4B is a thinking fine-tune of Qwen/Qwen3-4B-Thinking-2507. It supports up to **256K** tokens of total context (input and output combined) and can think for up to **65536** tokens when set to **high** reasoning effort. Unlike the original Qwen3, this model was fine-tuned on GPT-OSS reasoning outputs (rather than the DeepSeek-R1 outputs Qwen3 was likely fine-tuned on for advanced reasoning). Fine-tuning on GPT-OSS outputs taught the model to think efficiently, to follow instructions better, and to scale how long it thinks to a requested reasoning effort.
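
As a back-of-the-envelope illustration of the shared context budget (assuming **256K** here means 256 × 1024 tokens; the `output_budget` helper below is our own hypothetical sketch, not part of any API):

```python
TOTAL_CONTEXT = 256 * 1024    # 256K tokens shared by input and output
HIGH_EFFORT_THINKING = 65536  # max thinking tokens at high reasoning effort

def output_budget(prompt_tokens: int) -> int:
    """Tokens left for the reply (thinking + final answer) after the prompt."""
    return TOTAL_CONTEXT - prompt_tokens

# A 10,000-token prompt still leaves room for a maximal thinking trace:
assert output_budget(10_000) > HIGH_EFFORT_THINKING
print(output_budget(10_000))  # 252144
```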

⚠️ This model is NOT as censored as the original GPT-OSS; we focused on performance rather than censorship. The model is still safety-trained, it just allows more *"creative"* prompts than GPT-OSS does. We are not responsible for what the model generates.

Keep in mind that this is a community project; we are NOT affiliated with Qwen by Alibaba or with GPT-OSS by OpenAI.

# Format

This is the chat format of this model (you can also check the Jinja template file in "Files and versions"):
```
<|im_start|>system
You are a helpful assistant


Reasoning effort: low<|im_end|>
<|im_start|>user
Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|im_end|>
<|im_start|>assistant
<think>
Just continue: 13,21,34,...
</think>
The Fibonacci sequence continues by adding the two preceding numbers.
After **8** the next term is **13** (8 + 5 = 13), then **21** (8 + 13 = 21), followed by **34** (13 + 21 = 34), and so on.

So the sequence starts:

**1, 1, 2, 3, 5, 8, 13, 21, 34, …**<|im_end|>
```

As you can see, you set the reasoning effort via the system prompt. We recommend leaving **2** blank lines after your system prompt and only then putting "Reasoning effort: [low,medium,high]". For your information, the output above was generated by our model.
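
The same layout can also be assembled programmatically. Here is a minimal sketch (the `build_prompt` helper and its argument names are our own illustration, not part of the model's API; in practice the bundled Jinja chat template via `tokenizer.apply_chat_template` should produce the same result):

```python
def build_prompt(system: str, user: str, effort: str = "medium") -> str:
    """Assemble the chat format shown above, placing the reasoning
    effort two blank lines below the system prompt text."""
    assert effort in ("low", "medium", "high")
    return (
        f"<|im_start|>system\n{system}\n\n\n"
        f"Reasoning effort: {effort}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "You are a helpful assistant",
    "Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,",
    effort="low",
)
print(prompt)
```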

# Additional information

The model was trained using Unsloth on a mix of private and public datasets.