---
library_name: transformers
tags:
- DPO
- Humor
- Humor Generation
license: llama2
base_model:
- meta-llama/Llama-2-7b-hf
---
A humor-focused language model trained on prompts and completions scraped from a subreddit known for its comedic content. The model is first trained with Supervised Fine-Tuning (SFT), using Parameter-Efficient Fine-Tuning (PEFT) via LoRA to keep the number of updated parameters small. It is then further refined with Direct Preference Optimization (DPO), which aligns it with human preferences by leveraging chosen and rejected responses from the dataset. This multi-stage training pipeline aims to produce contextually appropriate, humorous outputs while remaining computationally efficient.
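The DPO stage consumes preference pairs: for each prompt, a preferred ("chosen") and a dispreferred ("rejected") response. A minimal sketch of that data shape, with field names following TRL's `DPOTrainer` convention and a hypothetical example row (not taken from the actual training set):

```python
# Preference pairs for DPO: each record holds a prompt plus a chosen
# (preferred) and rejected (dispreferred) completion. The field names
# follow TRL's DPOTrainer convention; the example row is illustrative.
preference_data = [
    {
        "prompt": "Why did the programmer quit his job?",
        "chosen": "Because he didn't get arrays.",
        "rejected": "He found another job.",
    },
]

def validate_pairs(records):
    """Check each record has the three fields DPO expects."""
    required = {"prompt", "chosen", "rejected"}
    for rec in records:
        missing = required - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {missing}")
    return len(records)

print(validate_pairs(preference_data))  # number of valid preference pairs
```

A dataset in this shape can be passed directly to a preference-optimization trainer such as TRL's `DPOTrainer`, which derives its loss from the gap between the policy's log-probabilities on the chosen and rejected completions.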
The SFT-trained version can be found here: [Humorous_SFT_LLama2_7b](https://huggingface.co/ALEXIOSTER/Humorous_SFT_LLama2_7b).