---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: gold-code-deepspeed-test
tags:
- generated_from_trainer
- trl
- gold
- hf_jobs
licence: license
---
# Model Card for gold-code-deepspeed-test
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned model as a text-generation pipeline on the GPU.
generator = pipeline("text-generation", model="moos124/gold-code-deepspeed-test", device="cuda")

# Passing a list of chat messages makes the pipeline apply the model's chat template;
# return_full_text=False returns only the newly generated reply.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
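For explicit control over the chat template and decoding, the same model can also be driven through `AutoTokenizer` and `AutoModelForCausalLM`. The following is a minimal sketch of the standard Transformers chat workflow; the dtype and device settings are illustrative choices, not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moos124/gold-code-deepspeed-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Render the conversation with the model's chat template and append the generation prompt.
messages = [{"role": "user", "content": "Which would you choose: a one-way trip to the past or the future?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens, then decode only the model's reply.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```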
## Training procedure
This model was trained with [GOLD](https://huggingface.co/spaces/HuggingFaceH4/general-on-policy-logit-distillation) (General On-policy Logit Distillation), an on-policy distillation method.
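As a reference for what such a run looks like, the sketch below uses TRL's `GKDTrainer`, a related on-policy distillation trainer in TRL (the exact GOLD entry point may differ). The teacher model, dataset, and hyperparameters are illustrative assumptions, not the settings used for this model:

```python
# Minimal on-policy distillation sketch with TRL's GKDTrainer.
# NOTE: the teacher model, dataset, and hyperparameters below are assumptions
# for illustration; they are not the settings used to train this model.
from datasets import load_dataset
from trl import GKDConfig, GKDTrainer

# Illustrative conversational dataset choice.
train_dataset = load_dataset("trl-lib/chatbot_arena_completions", split="train")

training_args = GKDConfig(
    output_dir="gold-code-deepspeed-test",
    lmbda=0.5,        # fraction of on-policy (student-generated) batches
    beta=0.5,         # interpolation coefficient of the generalized JSD loss
    temperature=0.9,  # sampling temperature for on-policy generation
)

trainer = GKDTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    teacher_model="Qwen/Qwen2.5-7B-Instruct",  # assumed teacher, for illustration
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```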
### Framework versions
- TRL: 1.3.0
- Transformers: 5.7.0
- PyTorch: 2.6.0+cu124
- Datasets: 4.8.5
- Tokenizers: 0.22.2
## Citations
Cite GOLD as:
```bibtex
@misc{patino2025unlocking,
    title = {{Unlocking On-Policy Distillation for Any Model Family}},
    author = {Carlos Miguel Patiño and Kashif Rasul and Quentin Gallouédec and Ben Burtenshaw and Sergio Paniego and Vaibhav Srivastav and Thibaud Frere and Ed Beeching and Lewis Tunstall and Leandro von Werra and Thomas Wolf},
    year = 2025,
    url = {https://huggingface.co/spaces/HuggingFaceH4/general-on-policy-logit-distillation},
}
```
Cite TRL as:
```bibtex
@software{vonwerra2020trl,
    title = {{TRL: Transformers Reinforcement Learning}},
    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license = {Apache-2.0},
    url = {https://github.com/huggingface/trl},
    year = {2020}
}
```