---
base_model: Unsloth/qwen3-8b
library_name: peft
model_name: output_orpo
tags:
- base_model:adapter:Unsloth/qwen3-8b
- lora
- orpo
- transformers
- trl
- unsloth
licence: license
pipeline_tag: text-generation
---

# Model Card for output_orpo

This model is a LoRA fine-tuned version of [Unsloth/qwen3-8b](https://huggingface.co/Unsloth/qwen3-8b). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# "output_orpo" is the adapter produced by this training run; point this at the
# local output directory or the Hub repository the adapter was uploaded to.
generator = pipeline("text-generation", model="output_orpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

The adapter was trained with a learning rate of 5e-5.

This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691). A minimal training sketch is included at the end of this card.

### Framework versions

- PEFT: 0.18.0
- TRL: 0.24.0
- Transformers: 4.57.1
- Pytorch: 2.9.0+cu128
- Datasets: 4.3.0
- Tokenizers: 0.22.1

## Citations

Cite ORPO as:

```bibtex
@article{hong2024orpo,
    title  = {{ORPO: Monolithic Preference Optimization without Reference Model}},
    author = {Jiwoo Hong and Noah Lee and James Thorne},
    year   = 2024,
    eprint = {arXiv:2403.07691}
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
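
## Training sketch

For reference, the following is a minimal sketch of how an ORPO run like this one can be set up with TRL's `ORPOTrainer`. Only the base model, the output directory name, and the 5e-5 learning rate come from this card; the dataset and the LoRA hyperparameters below are illustrative assumptions, not the configuration actually used.

```python
# Minimal ORPO + LoRA training sketch (illustrative; not the exact recipe used here).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("Unsloth/qwen3-8b")
tokenizer = AutoTokenizer.from_pretrained("Unsloth/qwen3-8b")

# Assumption: any preference dataset with "chosen"/"rejected" columns works;
# trl-lib/ultrafeedback_binarized is a common public example, not the dataset used here.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = ORPOConfig(
    output_dir="output_orpo",
    learning_rate=5e-5,  # matches the learning rate noted in this card
)

# Illustrative LoRA settings; the rank/alpha actually used are not recorded in the card.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # ORPOTrainer wraps the base model in a LoRA adapter
)
trainer.train()
```

Because a `peft_config` is passed, the trainer saves only the LoRA adapter weights to `output_orpo`, which is what this repository contains; the base model must still be available when loading the adapter.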