# Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

**Description**: The dataset for the image distortion experiment with the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). (2024.1.22 update: added the dataset for evaluating text-image alignment before and after fine-tuning.)

**Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/tree/main).

**Directory**
- d3po_dataset
  - epoch1
    - all_img
      - *.png
    - deformed_img
      - *.png
    - json
      - data.json (required for training)
    - prompt.json
    - sample.pkl (required for training; see the loading sketch below)
  - epoch2
  - ...
  - epoch5

  
- text2img_dataset
  - img
  - data_*.json
  - plot.ipynb
  - prompt.txt
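
A minimal loading sketch in Python, for convenience. Only the file locations and formats come from the tree above; the schemas of `data.json` and `sample.pkl` are defined by the D3PO generation code linked above, so the sketch stops at loading the objects and inspecting their types.

```python
import json
import pickle
from glob import glob
from pathlib import Path

root = Path("d3po_dataset/epoch1")

# data.json and sample.pkl are the two files marked "required for training".
with open(root / "json" / "data.json") as f:
    data = json.load(f)

with open(root / "sample.pkl", "rb") as f:
    sample = pickle.load(f)

with open(root / "prompt.json") as f:
    prompts = json.load(f)

# Inspect the top-level structure; the exact schema is defined by
# the D3PO source code, not assumed here.
print(type(data), type(sample), type(prompts))

# The text2img_dataset split ships several data_*.json files;
# glob them rather than hard-coding file names.
for path in sorted(glob("text2img_dataset/data_*.json")):
    with open(path) as f:
        records = json.load(f)
    print(path, type(records))
```

The same pattern applies to `epoch2` through `epoch5`; swap the `root` path per epoch.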

**Citation**
```
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```