Update README.md
README.md CHANGED

@@ -19,11 +19,6 @@ library_name: transformers
 # gpt-oss-20b-Coding-Distill
 This project uses Unsloth for fine-tuning. All training data is converted to OpenAI Harmony format before training, but there may be cases where the output format doesn't conform to the OpenAI Harmony specification.
 
-## How was it trained?
-The code we actually used for training is publicly available on GitHub. It is published solely as a reference to help you perform high-quality fine-tuning.
-
-**GitHub repo**: [midorin-Linux/gpt-oss-20b-Coding-Distill](https://github.com/midorin-Linux/gpt-oss-20b-Coding-Distill)
-
 ## Do you want to use the pre-trained model?
 You can download the pre-trained model from Hugging Face.
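For context on the Harmony conversion mentioned in the README text above: the sketch below shows one way a chat-style training example can be rendered into OpenAI Harmony format, via the base model's chat template. The repo id and example messages are illustrative assumptions, not this project's actual data pipeline.

```python
from transformers import AutoTokenizer

# Illustrative only: the base gpt-oss tokenizer ships a chat template that
# renders messages in OpenAI Harmony format (<|start|>role<|message|>...<|end|>).
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

example = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
    {"role": "assistant", "content": "def reverse(s: str) -> str:\n    return s[::-1]"},
]

# tokenize=False returns the rendered Harmony string, so the format can be
# inspected before the example is handed to the trainer.
harmony_text = tokenizer.apply_chat_template(example, tokenize=False)
print(harmony_text)
```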
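The removed section pointed to the GitHub repo for the training code itself; that repo, not this sketch, reflects how the model was actually trained. As loose orientation only, a generic Unsloth LoRA setup looks roughly like this, with every value below a placeholder assumption:

```python
from unsloth import FastLanguageModel

# Minimal sketch of a generic Unsloth LoRA setup -- all values are
# placeholder assumptions, not this project's actual configuration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="openai/gpt-oss-20b",  # assumed base model
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```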
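Once downloaded, the weights load through the standard transformers API. The repo id below is assumed from the model name and may differ; check the model card for the exact path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the model name; verify it on the model card.
repo_id = "midorin-Linux/gpt-oss-20b-Coding-Distill"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Write a binary search in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```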