---
library_name: transformers
model_name: checkpoints
tags:
- generated_from_trainer
- trackio:https://lvwerra-atomiclm-chat.hf.space?project=huggingface&runs=smoltalk-5ep&sidebar=collapsed
- trackio
- trl
- sft
license: license
---

# Model Card for checkpoints

This model is a fine-tuned version of an unspecified base model (the base model name was not recorded when this card was generated). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the model as a chat text-generation pipeline (remove `device` to run on CPU)
generator = pipeline("text-generation", model="lvwerra/checkpoints", device="cuda")

# Pass the question in chat format and return only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[Visualize in Trackio](https://lvwerra-atomiclm-chat.hf.space?project=huggingface&runs=smoltalk-5ep&sidebar=collapsed)

This model was trained with SFT (supervised fine-tuning); a sketch of such a run is shown below.
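
The card does not record the actual training script, so the following is a minimal, hypothetical sketch of an SFT run with TRL's `SFTTrainer`. The base model and dataset are assumptions: the `smoltalk-5ep` run name in the Trackio tag hints at the SmolTalk dataset trained for 5 epochs, but neither is confirmed anywhere in this card.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumption: dataset chosen to match the "smoltalk-5ep" run name; not recorded in this card
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # hypothetical base model; the card leaves the base model unspecified
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="checkpoints",
        num_train_epochs=5,   # matches the "5ep" hint in the run name
        report_to="trackio",  # assumption: Trackio logging via the transformers integration
    ),
)
trainer.train()
```

For a conversational dataset like this one (lists of `{"role": ..., "content": ...}` messages), `SFTTrainer` applies the tokenizer's chat template before tokenizing, so no manual prompt formatting is needed.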

## Framework versions

- TRL: 0.29.0
- Transformers: 5.3.0
- PyTorch: 2.10.0
- Datasets: 4.6.1
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}
```