
# Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

**Description:** This dataset was used for the image-distortion experiment on the anything-v5 model in the paper *Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model*.

**Source Code:** The code used to generate this dataset can be found here.

## Directory

```
d3po_dataset/
├── epoch1/
│   ├── all_img/
│   │   └── *.png
│   ├── deformed_img/
│   │   └── *.png
│   ├── json/
│   │   └── data.json    (required for training)
│   ├── prompt.json
│   └── sample.pkl       (required for training)
├── epoch2/
├── ...
└── epoch5/
```
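As a minimal sketch of how the two files marked "required for training" might be loaded per epoch: the file names (`json/data.json`, `sample.pkl`) come from the listing above, but the structure of their contents is not documented here, so the placeholder contents in the demo below are purely hypothetical.

```python
import json
import pickle
import tempfile
from pathlib import Path


def load_epoch(epoch_dir):
    """Load the two files the directory listing marks as required for training.

    File names follow the listing above; the schema of the returned objects
    is an assumption, so inspect them before wiring into a training loop.
    """
    epoch_dir = Path(epoch_dir)
    with open(epoch_dir / "json" / "data.json") as f:
        data = json.load(f)          # preference/metadata records (assumed)
    with open(epoch_dir / "sample.pkl", "rb") as f:
        sample = pickle.load(f)      # pickled sampling artifacts (assumed)
    return data, sample


# Demo against a throwaway epoch directory with placeholder contents.
with tempfile.TemporaryDirectory() as tmp:
    epoch = Path(tmp) / "epoch1"
    (epoch / "json").mkdir(parents=True)
    (epoch / "json" / "data.json").write_text('{"prompts": []}')
    (epoch / "sample.pkl").write_bytes(pickle.dumps({"latents": []}))
    data, sample = load_epoch(epoch)
    print(data, sample)
```

The same helper can be mapped over `epoch1` through `epoch5` to assemble the full training set.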