---
base_model: cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr
datasets: trl-lib/tldr
library_name: transformers
model_name: rloo_tldr
tags:
- generated_from_trainer
- trl
- rloo
licence: license
---

# Model Card for rloo_tldr

This model is a fine-tuned version of [cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr](https://huggingface.co/cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr) on the [trl-lib/tldr](https://huggingface.co/datasets/trl-lib/tldr) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

# The model was trained on the Reddit TL;DR prompt-completion format rather
# than chat messages, so we pass a plain text prompt; ending it with "TL;DR:"
# cues the model to produce a summary.
prompt = (
    "POST: If you had a time machine, but could only go to the past or the "
    "future once and never return, which would you choose and why?\n\nTL;DR:"
)
generator = pipeline("text-generation", model="sergiopaniego/rloo_tldr", device="cuda")
output = generator(prompt, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sergiopaniego/huggingface/runs/1pqm67uv)

This model was trained with RLOO, a method introduced in [Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs](https://huggingface.co/papers/2402.14740).

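RLOO draws $k$ completions per prompt, scores each with a reward model, and uses the mean reward of the other $k-1$ completions as a per-sample baseline, $A_i = R_i - \tfrac{1}{k-1}\sum_{j \neq i} R_j$, so no learned value function is needed. The snippet below is a minimal sketch of how such a run can be set up with TRL's `RLOOTrainer`; it is illustrative rather than a record of this model's exact configuration: `reward_len` is a hypothetical toy reward standing in for the reward model actually used, and `RLOOTrainer` arguments can differ across TRL versions.

```python
from datasets import load_dataset
from trl import RLOOConfig, RLOOTrainer

# Prompts from the same dataset this model was trained on.
dataset = load_dataset("trl-lib/tldr", split="train")

# Hypothetical toy reward favoring short completions; a real run would score
# completions with a trained reward model instead.
def reward_len(completions, **kwargs):
    return [-abs(30 - len(completion)) for completion in completions]

training_args = RLOOConfig(output_dir="rloo_tldr")
trainer = RLOOTrainer(
    model="cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```
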
### Framework versions

- TRL: 0.28.0.dev0
- Transformers: 4.57.6
- PyTorch: 2.9.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1

## Citations

Cite RLOO as:

```bibtex
@inproceedings{ahmadian2024back,
    title = {{Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs}},
    author = {Arash Ahmadian and Chris Cremer and Matthias Gall{\'{e}} and Marzieh Fadaee and Julia Kreutzer and Olivier Pietquin and Ahmet {\"{U}}st{\"{u}}n and Sara Hooker},
    year = 2024,
    booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2024, Bangkok, Thailand, August 11-16, 2024},
    pages = {12248--12267},
    publisher = {Association for Computational Linguistics},
    editor = {Lun{-}Wei Ku and Andre Martins and Vivek Srikumar},
}
```

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
    title = {{TRL: Transformers Reinforcement Learning}},
    author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license = {Apache-2.0},
    url = {https://github.com/huggingface/trl},
    year = {2020}
}
```