---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: transformers
model_name: kto_simplification_imbalanced
tags:
- generated_from_trainer
- unsloth
- kto
- trl
licence: license
---

# Model Card for kto_simplification_imbalanced

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit). It was trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IoakeimE/kto_simplification_imbalanced", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/ioakeime-aristotle-university-of-thessaloniki/kto_smiplification_imbalanced/runs/po8ovzhv)

This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).

### Framework versions

- TRL: 0.19.0
- Transformers: 4.53.0
- PyTorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.2

## Citations

Cite KTO as:

```bibtex
@article{ethayarajh2024kto,
    title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
    author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
    year = 2024,
    eprint = {arXiv:2402.01306},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
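
## Illustrative training sketch

The quick-start snippet above covers inference only. As a rough reference for how a KTO run over unpaired preference data can be set up with TRL's `KTOTrainer`, the sketch below shows one possible configuration. The dataset (`trl-lib/kto-mix-14k`), the LoRA adapter settings, and the desirable/undesirable weights are illustrative assumptions, not this model's actual training configuration.

```python
# Minimal sketch of KTO fine-tuning with TRL's KTOTrainer.
# Dataset, LoRA settings, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# KTO expects unpaired examples with "prompt", "completion", and a boolean
# "label" (True = desirable, False = undesirable). This public dataset is an
# example only, not the simplification data used for this model.
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

# The 4-bit base model is typically trained via a LoRA adapter (assumed here).
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, target_modules="all-linear")

# desirable_weight / undesirable_weight can compensate for class imbalance
# between desirable and undesirable examples; values here are placeholders.
training_args = KTOConfig(
    output_dir="kto_simplification_imbalanced",
    beta=0.1,
    desirable_weight=1.0,
    undesirable_weight=1.33,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```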