Add metadata for license, library_name, and pipeline_tag

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +22 -11
README.md CHANGED
@@ -1,3 +1,14 @@
+---
+license: apache-2.0
+library_name: diffusers
+pipeline_tag: image-to-image
+tags:
+- super-resolution
+- image-restoration
+- dpo
+- one-step-generation
+---
+
 <div align="center">
 <h2>GDPO-SR: Group Direct Preference Optimization for One-Step Generative Image Super-Resolution</h2>
 
@@ -11,8 +22,9 @@
 <sup>1</sup>The Hong Kong Polytechnic University, <sup>2</sup>OPPO Research Institute
 </div>
 
-[![](https://img.shields.io/badge/ArXiv%20-Paper-b31b1b?logo=arxiv&logoColor=red)](https://arxiv.org/pdf/2603.16769)&nbsp; [![weights](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-model%20weights-blue)](https://huggingface.co/Joypop/GDPO/tree/main)
+[![](https://img.shields.io/badge/ArXiv%20-Paper-b31b1b?logo=arxiv&logoColor=red)](https://huggingface.co/papers/2603.16769)&nbsp; [![weights](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-model%20weights-blue)](https://huggingface.co/Joypop/GDPO/tree/main)
 
+This repository contains the weights for GDPO-SR, presented in the paper [GDPO-SR: Group Direct Preference Optimization for One-Step Generative Image Super-Resolution](https://huggingface.co/papers/2603.16769).
 
 ## ⏰ Update
 - **2026.3.19**: Paper is released on [ArXiv](https://arxiv.org/pdf/2603.16769).
@@ -41,7 +53,7 @@ pip install -r requirements.txt
 
 #### Step 2: Prepare testing data and run testing command
 You can modify input_path and output_path to run testing command. The input_path is the path of the test image and the output_path is the path where the output images are saved.
-```
+```shell
 CUDA_VISIBLE_DEVICES=0, python GDPOSR/inferences/test.py \
 --input_path test_LR \
 --output_path experiment/GDPOSR \
@@ -55,31 +67,31 @@ CUDA_VISIBLE_DEVICES=0, python GDPOSR/inferences/test.py \
 --time_step_noise=250
 ```
 or
-```
+```shell
 bash scripts/test/test.sh
 ```
 
 ## 🚄 Training Phase
 
 ### Step1: Prepare training data
-Download the [OpenImage dataset](https://storage.googleapis.com/openimages/web/index.html) and [LSIDR dataset](https://github.com/ofsoundof/LSDIR). For each image in the LSDIR dataset, crop multiple 512×512 image patches using a sliding window with a stride of 64 pixels;
+Download the [LSIDR dataset](https://github.com/ofsoundof/LSDIR) and [FFHQ dataset](https://github.com/NVlabs/ffhq-dataset) and crop multiple 512×512 image patches using a sliding window with a stride of 64 pixels;
 
 
 ### Step2: Train NAOSD.
-```
+```shell
 bash scripts/train/train_NAOSD.sh
 ```
-The hyperparameters in train_NAOSD.sh can be modified to suit different experimental settings. Besides, after training with NAOSD, you can use GDPOSR/mergelora.py to merge the LoRA into the UNet and VAE as base model for subsequent reinforcement learning training and inference.
+The hyperparameters in train_NAOSD.sh can be modified to suit different experimental settings. Besides, after training with NAOSD, you can use `GDPOSR/mergelora.py` to merge the LoRA into the UNet and VAE as base model for subsequent reinforcement learning training and inference.
 
 ### Step3: Train GDPO-SR
-```
+```shell
 bash scripts/train/train_GDPOSR.sh
 ```
-The hyperparameters in train_GDPOSR.sh can be modified to suit different experimental settings. Besides, after training with GDPO-SR, you can use GDPOSR/mergelora.py to merge the LoRA into the UNet for subsequent inference.
+The hyperparameters in train_GDPOSR.sh can be modified to suit different experimental settings. Besides, after training with GDPO-SR, you can use `GDPOSR/mergelora.py` to merge the LoRA into the UNet for subsequent inference.
 
 ## 🔗 Citations
 
-```
+```bibtex
 @article{yi2026gdpo,
 title={GDPO-SR: Group Direct Preference Optimization for One-Step Generative Image Super-Resolution},
 author={Yi, Qiaosi and Li, Shuai and Wu, Rongyuan and Sun, Lingchen and Zhang, Zhengqiang and Zhang, Lei},
@@ -92,5 +104,4 @@ The hyperparameters in train_GDPOSR.sh can be modified to suit different experim
 This project is released under the [Apache 2.0 license](LICENSE).
 
 ## 📧 Contact
-If you have any questions, please contact: qiaosiyijoyies@gmail.com
-
+If you have any questions, please contact: qiaosiyijoyies@gmail.com
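
The YAML block this PR prepends is standard Hub model-card frontmatter: `license`, `library_name`, and `pipeline_tag` are the fields the Hub reads to display licensing, pick the library widget, and index the model under a task. As a quick sanity check before pushing a similar change, the frontmatter can be parsed with a few lines of stdlib Python; the hand-rolled parser below is a minimal sketch that handles only this flat key/value-plus-tags shape, not general YAML.

```python
# Minimal check that README frontmatter carries the model-card fields this
# PR adds. Handles only flat "key: value" pairs and "- item" tag entries;
# real YAML (nesting, quoting) would need PyYAML.

FRONTMATTER = """\
---
license: apache-2.0
library_name: diffusers
pipeline_tag: image-to-image
tags:
- super-resolution
- image-restoration
- dpo
- one-step-generation
---
"""

def parse_frontmatter(text):
    """Return (metadata dict, tags list) from a '---'-delimited block."""
    lines = text.strip().splitlines()
    assert lines[0] == "---" and lines[-1] == "---", "missing delimiters"
    meta, tags = {}, []
    for line in lines[1:-1]:
        if line.startswith("- "):
            tags.append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            if value.strip():          # skip 'tags:' itself (no value)
                meta[key.strip()] = value.strip()
    return meta, tags

meta, tags = parse_frontmatter(FRONTMATTER)
for required in ("license", "library_name", "pipeline_tag"):
    assert required in meta, f"missing required key: {required}"
print(meta["pipeline_tag"], len(tags))
```

The delimiter and key names match the diff above exactly; only the validation logic is illustrative.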