---
base_model: Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8
library_name: transformers
model_name: tiny-think-dpo-math-stem-apo_zero-beta1-lr3e-6-e1-bs8
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---

# Model Card for tiny-think-dpo-math-stem-apo_zero-beta1-lr3e-6-e1-bs8

This model is a fine-tuned version of [Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8](https://huggingface.co/Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Training procedure

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.26.2
- Transformers: 4.57.5
- PyTorch: 2.9.0+cu128
- Datasets: 4.5.0
- Tokenizers: 0.22.2
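### Training objective

For reference, DPO fine-tunes the policy $\pi_\theta$ directly on preference pairs against a frozen reference model $\pi_{\text{ref}}$, using the objective from the paper linked above:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

where $y_w$ and $y_l$ are the chosen and rejected completions for prompt $x$, and $\beta$ controls how far the policy may drift from the reference. The `apo_zero-beta1` fragment in the model name suggests this run used TRL's anchored (APO-zero) variant of this loss with $\beta = 1$; that reading of the name is an assumption, not something confirmed by the card.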
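### Training sketch

A minimal sketch of how a comparable run could be launched with TRL's `DPOTrainer`. The hyperparameters are read off the model name and are assumptions, and the dataset below is a public placeholder, since the actual math/STEM preference data is not documented in this card.

```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns;
# the real math/STEM preference data for this model is not specified here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# Hyperparameters inferred from the model name (assumed, not confirmed):
# apo_zero loss, beta=1, lr=3e-6, 1 epoch, per-device batch size 8.
training_args = DPOConfig(
    output_dir="tiny-think-dpo-math-stem-apo_zero-beta1-lr3e-6-e1-bs8",
    loss_type="apo_zero",
    beta=1.0,
    learning_rate=3e-6,
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = DPOTrainer(
    model="Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8",
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```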
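## Quick start

A minimal usage sketch with the `transformers` text-generation pipeline. The repository id below assumes the model is published under the same namespace as its base model, and the prompt is purely illustrative.

```python
from transformers import pipeline

# Assumed repository id (same namespace as the base model); adjust if needed.
model_id = "Shekswess/tiny-think-dpo-math-stem-apo_zero-beta1-lr3e-6-e1-bs8"
generator = pipeline("text-generation", model=model_id)

# The pipeline applies the model's chat template to role/content messages.
messages = [{"role": "user", "content": "What is the derivative of x^2 + 3x?"}]
output = generator(messages, max_new_tokens=256, return_full_text=False)
print(output[0]["generated_text"])
```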