---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: gold-code-deepspeed-testV2
tags:
  - generated_from_trainer
  - trl
  - hf_jobs
  - gold
licence: license
---

# Model Card for gold-code-deepspeed-testV2

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="moos124/gold-code-deepspeed-testV2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
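
If you need finer control over decoding than the pipeline offers, you can also load the model and tokenizer directly. The snippet below is a minimal sketch using the standard `transformers` chat-template API; the prompt and sampling settings are chosen purely for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moos124/gold-code-deepspeed-testV2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain knowledge distillation in one paragraph."}]
# Render the chat template and tokenize in a single step.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative, not tuned values.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```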

## Training procedure

This model was trained with GOLD (General On-policy Logit Distillation); a hedged training sketch follows below, and the method is described in the citation at the end of this card.
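
The exact GOLD training entry point is not shown in this card, so the following is a minimal sketch of an on-policy logit-distillation run using TRL's `GKDTrainer`, a closely related trainer. The teacher model, dataset, and hyperparameters are illustrative assumptions, not the recipe behind this checkpoint.

```python
# Minimal on-policy logit-distillation sketch with TRL's GKDTrainer.
# NOTE: the teacher model, dataset, and hyperparameters below are
# illustrative assumptions, not the configuration used for this checkpoint.
from datasets import load_dataset
from trl import GKDConfig, GKDTrainer

# Conversational dataset with a "messages" column (illustrative choice).
train_dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = GKDConfig(
    output_dir="gold-code-deepspeed-testV2",
    lmbda=0.5,           # fraction of batches generated on-policy by the student
    beta=0.5,            # interpolation coefficient of the generalized JSD loss
    max_new_tokens=128,  # length of student completions sampled during training
    per_device_train_batch_size=1,
)

trainer = GKDTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",        # student: the base model of this card
    teacher_model="Qwen/Qwen2.5-7B-Instruct",  # teacher: an illustrative larger model
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```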

### Framework versions

- TRL: 1.3.0
- Transformers: 5.7.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.8.5
- Tokenizers: 0.22.2

## Citations

Cite GOLD as:

```bibtex
@misc{patino2025unlocking,
    title        = {{Unlocking On-Policy Distillation for Any Model Family}},
    author       = {Carlos Miguel Patiño and Kashif Rasul and Quentin Gallouédec and Ben Burtenshaw and Sergio Paniego and Vaibhav Srivastav and Thibaud Frere and Ed Beeching and Lewis Tunstall and Leandro von Werra and Thomas Wolf},
    year         = 2025,
    url          = {https://huggingface.co/spaces/HuggingFaceH4/general-on-policy-logit-distillation},
}
```

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}
```