# Dataset for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

**Description**: The dataset for the image-distortion experiment with the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231).

**Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/tree/main).
**Directory**

- d3po_dataset
  - epoch1
    - all_img
      - *.png
    - deformed_img
      - *.png
    - json
      - data.json (required for training)
      - prompt.json
    - sample.pkl (required for training)
  - epoch2
  - ...
  - epoch5
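Given the layout above, each epoch can be loaded with a small helper. The sketch below is illustrative, not part of the official D3PO code: the `load_epoch` function is a hypothetical helper, and the internal structure of `data.json` and `sample.pkl` is not documented here, so both are returned as-is.

```python
import json
import pickle
from pathlib import Path

def load_epoch(epoch_dir):
    """Load the training files for one epoch (e.g. d3po_dataset/epoch1).

    Assumes the directory layout listed above. The contents of
    data.json and sample.pkl are undocumented here and returned as-is.
    """
    epoch_dir = Path(epoch_dir)
    # data.json and sample.pkl are marked "required for training" above
    with open(epoch_dir / "json" / "data.json") as f:
        data = json.load(f)
    with open(epoch_dir / "sample.pkl", "rb") as f:
        samples = pickle.load(f)
    # Generated images; deformed_img holds the distorted counterparts
    images = sorted((epoch_dir / "all_img").glob("*.png"))
    return data, samples, images
```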
**Citation**

```
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```