---
base_model: distilgpt2
library_name: transformers
model_name: distilgpt2-dpo-checkpoint
tags:
- generated_from_trainer
- trl
- dpo
license: license
---

# Preference-Tuned Summarizer using Direct Preference Optimization (DPO)

This repository hosts a lightweight text summarization model fine-tuned from DistilGPT2 with Direct Preference Optimization (DPO). The model was trained on preference-labeled data to generate summaries that align more closely with human preferences than those produced by supervised fine-tuning alone.

---

## Model Details

- **Base model:** DistilGPT2
- **Fine-tuning method:** Direct Preference Optimization (DPO)
- **Dataset:** Preference pairs with `prompt`, `chosen`, and `rejected` summaries (see the example after this list)
- **Evaluation metrics:** ROUGE-1 (0.2841), ROUGE-L (0.2247), BLEU (0.0286)
- **Use case:** Generating high-quality, human-aligned text summaries
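
For illustration, a single training record in that schema might look like the following. The texts here are invented; only the `prompt`/`chosen`/`rejected` field names come from the training setup:

```python
# Hypothetical preference pair; only the prompt/chosen/rejected schema
# is taken from this model card, the texts are purely illustrative.
preference_example = {
    "prompt": "Summarize: The city council met Tuesday to debate the transit budget...",
    "chosen": "The council debated the transit budget, focusing on bus funding.",
    "rejected": "The city council met on Tuesday.",
}
```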

---

## How to Use

You can load and use the model with the Hugging Face Transformers library:

```python
from transformers import pipeline

# DistilGPT2 is a causal LM, so the summarizer runs as a text-generation pipeline.
summarizer = pipeline("text-generation", model="justthzz/preference-tuned-summarizer")

text = "Summarize: Your input text here."

# Greedy decoding; max_new_tokens caps the summary length without counting the prompt.
summary = summarizer(text, max_new_tokens=150, do_sample=False)

# Note: generated_text contains the prompt followed by the generated summary.
print(summary[0]["generated_text"])
```
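
If you want finer control over decoding, the same checkpoint can also be loaded without the pipeline wrapper. This is a minimal sketch using the standard Transformers API, assuming the same repo id as above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("justthzz/preference-tuned-summarizer")
model = AutoModelForCausalLM.from_pretrained("justthzz/preference-tuned-summarizer")

inputs = tokenizer("Summarize: Your input text here.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```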

---

## Files Included

- `pytorch_model.bin` - Model weights
- `config.json` - Model configuration
- Tokenizer files (`tokenizer.json`, `vocab.json`, `merges.txt`, etc.)
- Model card and this README

---

## About DPO

Direct Preference Optimization (DPO) is a fine-tuning technique that uses preference-labeled data (pairs of chosen and rejected responses) to optimize the policy directly against human preferences, without training a separate reward model. This typically improves alignment with human judgments beyond what supervised fine-tuning alone achieves.
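
As a rough illustration, the core DPO objective from the paper cited below can be sketched in a few lines. This is a simplified version, not the exact TRL implementation, and it assumes the per-sequence log-probabilities have already been computed:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    """Simplified DPO loss (Rafailov et al., 2023).

    Each argument is a tensor of per-sequence log-probabilities;
    beta controls how far the policy may drift from the reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)
    # Logistic loss on the reward margin: push chosen above rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```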

---

## Evaluation Results

| Metric  | Base Summary (avg) | DPO Summary (avg) |
|---------|--------------------|-------------------|
| ROUGE-1 | 0.0442             | **0.2841**        |
| ROUGE-L | 0.0366             | **0.2247**        |
| BLEU    | 0.0000             | **0.0286**        |
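
To reproduce this style of evaluation, a minimal sketch with the Hugging Face `evaluate` library (the predictions and references below are placeholders; the original evaluation data is not bundled with this repo):

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

# Placeholder data; substitute your generated summaries and reference summaries.
predictions = ["The council approved the new transit budget."]
references = ["The city council approved the transit budget on Tuesday."]

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
```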

---

## Links

- [GitHub Repository](https://github.com/justthzz/preference-tuned-summarizer)
- [Model on Hugging Face](https://huggingface.co/justthzz/preference-tuned-summarizer)

---

### Framework versions

- TRL: 0.20.0.dev0
- Transformers: 4.53.0
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```