---
base_model: distilgpt2
library_name: transformers
model_name: distilgpt2-dpo-checkpoint
tags:
- generated_from_trainer
- trl
- dpo
license: license
---

# Preference-Tuned Summarizer using Direct Preference Optimization (DPO)

This repository hosts a lightweight text summarization model fine-tuned from DistilGPT2 with Direct Preference Optimization (DPO). The model was trained on preference-labeled data to generate summaries that align more closely with human preferences than those produced by standard supervised fine-tuning.

---

## Model Details

- **Base model:** DistilGPT2
- **Fine-tuning method:** Direct Preference Optimization (DPO)
- **Dataset:** Preference pairs with `prompt`, `chosen`, and `rejected` summaries (see the example record after this list)
- **Evaluation metrics:** ROUGE-1 (0.2841), ROUGE-L (0.2247), BLEU (0.0286)
- **Use case:** Generating high-quality, human-aligned text summaries
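
A record in this preference format might look like the following (illustrative values only; the field names follow the TRL DPO convention):

```python
# Illustrative preference record; the data shown here is hypothetical.
record = {
    "prompt": "Summarize: The city council voted to expand bus service ...",
    "chosen": "The council approved an expansion of the city's bus service.",
    "rejected": "The city talked about buses.",
}
```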

---

## How to Use

You can load and use the model easily with the Hugging Face Transformers library:

```python
from transformers import pipeline

# DistilGPT2 is a causal language model, so summarization is framed as
# prompted text generation rather than encoder-decoder summarization.
summarizer = pipeline("text-generation", model="justthzz/preference-tuned-summarizer")
text = "Summarize: Your input text here."

# Greedy decoding (do_sample=False) gives deterministic output;
# max_length bounds the combined length of the prompt and the summary.
summary = summarizer(text, max_length=150, do_sample=False)
print(summary[0]['generated_text'])
```
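
Note that `text-generation` pipelines return the prompt together with the continuation in `generated_text`; pass `return_full_text=False` in the call if you only want the newly generated summary.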

---

## Files Included

- `pytorch_model.bin` - Model weights
- `config.json` - Model configuration
- Tokenizer files (`tokenizer.json`, `vocab.txt`, etc.)
- Model card and this README

---

## About DPO

Direct Preference Optimization is a fine-tuning technique that uses preference-labeled data directly: for each prompt, the model is trained to increase the likelihood of the chosen response relative to the rejected one, without fitting a separate reward model. This improves alignment with human judgments beyond what supervised fine-tuning alone typically achieves.
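
Concretely, DPO minimizes the following objective from Rafailov et al. (2023), where $\pi_\theta$ is the model being trained, $\pi_{\mathrm{ref}}$ is a frozen reference copy, $(x, y_w, y_l)$ is a prompt with its chosen and rejected responses, $\sigma$ is the logistic function, and $\beta$ controls how strongly the policy is kept close to the reference:

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

In TRL this objective is implemented by `DPOTrainer`. The sketch below is a minimal, hypothetical setup, not the exact script used to train this checkpoint, and it uses a toy placeholder dataset:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers define no pad token

# Toy preference data (same illustrative record as in Model Details);
# a real run would load the full preference dataset instead.
train_dataset = Dataset.from_dict({
    "prompt": ["Summarize: The city council voted to expand bus service ..."],
    "chosen": ["The council approved an expansion of the city's bus service."],
    "rejected": ["The city talked about buses."],
})

# beta is the DPO temperature: higher values keep the policy
# closer to the frozen reference model.
training_args = DPOConfig(output_dir="distilgpt2-dpo-checkpoint", beta=0.1)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with None, TRL clones the base model as the reference
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```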

---

## Evaluation Results

| Metric   | Base Summary (avg) | DPO Summary (avg) |
|----------|--------------------|-------------------|
| ROUGE-1  | 0.0442             | **0.2841**        |
| ROUGE-L  | 0.0366             | **0.2247**        |
| BLEU     | 0.0000             | **0.0286**        |

---

## Links

- [GitHub Repository](https://github.com/justthzz/preference-tuned-summarizer)  
- [Model on Hugging Face](https://huggingface.co/justthzz/preference-tuned-summarizer)

---

### Framework versions

- TRL: 0.20.0.dev0
- Transformers: 4.53.0
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:
    
```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```