# Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)
**Description**: This repository contains the datasets for the D3PO method introduced in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). The *d3po_dataset* directory pertains to the image-distortion experiment with the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model.
The *text2img_dataset* directory comprises the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.
**Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/).
**Directory**
- d3po_dataset
  - epoch1
    - all_img
      - *.png
    - deformed_img
      - *.png
    - json
      - data.json (required for training)
      - prompt.json
    - sample.pkl (required for training)
  - epoch2
  - ...
  - epoch5
- text2img_dataset
  - img
  - data_*.json
  - plot.ipynb
  - prompt.txt
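A minimal sketch of how the files might be inspected after download, assuming the nesting shown above (in particular, that `sample.pkl` sits at the epoch level). The file schemas themselves are defined by the D3PO training code, so the snippet only loads and reports the objects:

```python
import glob
import json
import pickle
from pathlib import Path

# Hypothetical path: point this at the extracted dataset root.
root = Path("d3po_dataset")

for epoch_dir in sorted(root.glob("epoch*")):
    # data.json and sample.pkl are the files marked "required for training";
    # their exact contents are determined by the D3PO training code.
    with open(epoch_dir / "json" / "data.json") as f:
        data = json.load(f)
    # Unpickling may require the libraries (e.g., NumPy/PyTorch) used when saving.
    with open(epoch_dir / "sample.pkl", "rb") as f:
        sample = pickle.load(f)
    print(epoch_dir.name, type(data), type(sample))

# The prompt-image alignment records live in text2img_dataset/data_*.json.
for path in sorted(glob.glob("text2img_dataset/data_*.json")):
    with open(path) as f:
        records = json.load(f)
    print(path, type(records))
```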
**Citation**
```
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```