---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: 080bcf2f-eac4-47f9-9439-b106b1902f95
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for 080bcf2f-eac4-47f9-9439-b106b1902f95

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
# Load the fine-tuned model as a chat-style text-generation pipeline
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/080bcf2f-eac4-47f9-9439-b106b1902f95", device="cuda")

# Pass the question as a single-turn chat message and print only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
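
If you don't have a CUDA-capable GPU, drop `device="cuda"` or pass `device_map="auto"` instead to let the pipeline place the model automatically.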

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/5e4wjr9l)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
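
DPO optimizes the policy directly on preference pairs instead of fitting a separate reward model: given a prompt \\(x\\) with a chosen response \\(y_w\\) and a rejected response \\(y_l\\), it minimizes

$$
\mathcal{L}_{\text{DPO}} = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where \\(\pi_\theta\\) is the policy being trained, \\(\pi_{\text{ref}}\\) is a frozen copy of the base model, and \\(\beta\\) controls how far the policy may drift from the reference.

The exact dataset and hyperparameters for this run are not recorded in this card. As a minimal sketch of what DPO training with TRL looks like (the dataset name and `beta` value below are illustrative placeholders, not this model's actual configuration):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base policy and tokenizer; DPOTrainer builds the frozen reference model internally
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Placeholder preference dataset with prompt/chosen/rejected columns,
# not the data actually used to train this model
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# beta sets the strength of the implicit KL constraint toward the reference model
training_args = DPOConfig(output_dir="qwen2.5-1.5b-dpo", beta=0.1)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```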

### Framework versions

- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```