Improve model card for MaskDCPT with abstract, results, usage, and BasicSR library_name
#1 by nielsr (HF Staff) - opened

Files changed (1): README.md (+112 -3)

README.md CHANGED
@@ -1,13 +1,122 @@
  ---
  license: cc-by-nc-4.0
+ pipeline_tag: image-to-image
  tags:
  - image-restoration
  - all-in-one image restoration
  - unified image restoration
  - universal image restoration
- pipeline_tag: image-to-image
+ library_name: basicsr
  ---

- Paper: https://arxiv.org/abs/2510.13282
-
- Inference code: https://github.com/MILab-PKU/MaskDCPT

+ # Universal Image Restoration Pre-training via Masked Degradation Classification (MaskDCPT)
+
+ **Paper**: [Universal Image Restoration Pre-training via Masked Degradation Classification](https://arxiv.org/abs/2510.13282)
+ **Code / Project Page**: [https://github.com/MILab-PKU/MaskDCPT](https://github.com/MILab-PKU/MaskDCPT)
+
+ ## Abstract
+ This study introduces a Masked Degradation Classification Pre-Training method (MaskDCPT), designed to facilitate the classification of degradation types in input images, leading to comprehensive image restoration pre-training. Unlike conventional pre-training methods, MaskDCPT uses the degradation type of the image as an extremely weak supervision signal, while simultaneously leveraging image reconstruction to enhance performance and robustness. MaskDCPT includes an encoder and two decoders: the encoder extracts features from the masked low-quality input image; the classification decoder uses these features to identify the degradation type, whereas the reconstruction decoder aims to reconstruct the corresponding high-quality image. This design allows the pre-training to benefit from both masked image modeling and contrastive learning, resulting in a generalized representation suited for restoration tasks. Benefiting from the straightforward yet potent MaskDCPT, the pre-trained encoder can be used to address universal image restoration and achieves outstanding performance. Applying MaskDCPT significantly improves performance for both convolutional neural networks (CNNs) and Transformers, with a minimum PSNR gain of 3.77 dB on the 5D all-in-one restoration task and a 34.8% reduction in PIQE relative to the baseline in real-world degradation scenarios. It also exhibits strong generalization to previously unseen degradation types and levels. In addition, we curate and release the UIR-2.5M dataset, which includes 2.5 million paired restoration samples across 19 degradation types and over 200 degradation levels, incorporating both synthetic and real-world data. The dataset, source code, and models are available at https://github.com/MILab-PKU/MaskDCPT.
+
+ ## Model Overview
+ MaskDCPT is a pre-training approach for universal image restoration. It classifies the degradation type of masked low-quality inputs while jointly reconstructing the image, which boosts performance and robustness. The architecture features an encoder for feature extraction from masked low-quality inputs, a classification decoder that identifies the degradation type, and a reconstruction decoder that generates the high-quality image. This design effectively combines masked image modeling and contrastive learning, resulting in a robust, generalized representation suitable for a wide array of restoration tasks.
+
+ ![The overall pipeline of MaskDCPT.](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_pipeline.png)
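To make the "masked low-quality input" concrete, here is a minimal NumPy sketch of MAE-style random patch masking. The function name, patch size, and mask ratio are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def mask_patches(img, patch=16, mask_ratio=0.75, seed=0):
    """Zero out a random subset of non-overlapping patches (MAE-style).

    Illustrative stand-in for the masked input fed to the MaskDCPT
    encoder; the real implementation lives in the MaskDCPT repository.
    """
    h, w, _ = img.shape
    gh, gw = h // patch, w // patch          # patch-grid dimensions
    n = gh * gw
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[masked_idx] = True
    out = img.copy()
    for i in np.flatnonzero(mask):           # zero every masked patch
        r, col = divmod(int(i), gw)
        out[r * patch:(r + 1) * patch, col * patch:(col + 1) * patch, :] = 0.0
    return out, mask.reshape(gh, gw)

lq = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in low-quality image
masked, mask = mask_patches(lq)
print(mask.mean())  # fraction of masked patches
```

The encoder sees only such heavily masked degraded images, which is what forces it to learn degradation-aware rather than pixel-local features.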
+
+ ## Results
+ MaskDCPT demonstrates outstanding performance across various image restoration tasks, achieving significant improvements in PSNR and PIQE. Below are some visual results from the paper:
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_figures.png)
+
+ <details>
+ <summary><strong>5D All-in-one Image Restoration</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_5d.png)
+
+ </details>
+
+ <details>
+ <summary><strong>12D All-in-one Image Restoration</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_12d.png)
+
+ </details>
+
+ <details>
+ <summary><strong>Out-of-distribution Image Restoration (Gaussian Denoising)</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_ooddn.png)
+
+ </details>
+
+ <details>
+ <summary><strong>Real-world All-in-one Image Restoration</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_realworld.png)
+
+ </details>
+
+ <details>
+ <summary><strong>Mixed Degradation Image Restoration</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_mixed.png)
+
+ </details>
+
+ ## UIR-2.5M Dataset
+ We introduce and release the UIR-2.5M dataset, the largest universal image restoration dataset to date. It features:
+ - **2.5 million** image pairs.
+ - **19** degradation types.
+ - **More than 200** degradation levels, spanning both synthetic and real-world data.
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_dataset.png)
+
+ <details>
+ <summary><strong>Details of the UIR-2.5M dataset</strong> (click to expand)</summary>
+
+ ![](https://github.com/MILab-PKU/MaskDCPT/raw/main/assets/maskdcpt_dataset_table.png)
+
+ </details>
+
+ ## Quick Start / Sample Usage
+
+ ### Setup
+ Clone the repository and set up the environment:
+
+ ```shell
+ git clone https://github.com/MILab-PKU/MaskDCPT.git
+ cd MaskDCPT
+ ```
+
+ You can also create a new environment to avoid conflicts:
+
+ ```shell
+ conda env create -f environment.yml
+ conda activate maskdcpt
+ ```
+
+ > **Note:** If `basicsr` is already installed in your Python environment, remove it first to avoid conflicts.
+
+ ### Test / Inference
+ Download the [**pretrained models**](https://huggingface.co/Jiakui/MaskDCPT) and place them in the `./pretrained_models` directory.
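If you prefer scripting the download, the `huggingface_hub` client can fetch the checkpoints into the expected directory. This is a convenience sketch, not part of the repository's documented setup; `huggingface_hub` is an extra dependency.

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Fetch the MaskDCPT checkpoints from the Hub into ./pretrained_models
local_dir = snapshot_download(
    repo_id="Jiakui/MaskDCPT",
    local_dir="pretrained_models",
)
print(local_dir)
```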
+
+ To reproduce the results or run inference, modify the dataset path and model path in the configuration file, then execute:
+
+ ```shell
+ python basicsr/test.py -opt options/all_in_one/test/test_NAFNet_5d.yml
+ # You can modify the path of the config file after `-opt` for different tests.
+ ```
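The file passed via `-opt` follows BasicSR's YAML options convention. A hypothetical excerpt of the fields you would typically edit is shown below; the exact keys and paths are assumptions, so check `options/all_in_one/test/test_NAFNet_5d.yml` in the repository for the authoritative version.

```yaml
# Hypothetical BasicSR-style options excerpt -- edit the paths to match
# your data and the checkpoint you downloaded.
name: test_NAFNet_5d
model_type: ImageRestorationModel   # assumption; see the repo's config

datasets:
  test_1:
    name: ValSet
    type: PairedImageDataset
    dataroot_gt: /path/to/gt          # ground-truth (high-quality) images
    dataroot_lq: /path/to/lq          # degraded (low-quality) inputs

path:
  pretrain_network_g: ./pretrained_models/your_checkpoint.pth  # downloaded model
```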
108
+
109
+ ## Citation
110
+ If you find this repository useful, please consider giving a star ⭐ and citing our work:
111
 
112
+ ```bibtex
113
+ @misc{hu2025universal,
114
+ title={Universal Image Restoration Pre-training via Masked Degradation Classification},
115
+ author={JiaKui Hu and Zhengjian Yao and Lujia Jin and Yinghao Chen and Yanye Lu},
116
+ year={2025},
117
+ eprint={2510.13282},
118
+ archivePrefix={arXiv},
119
+ primaryClass={cs.CV},
120
+ url={https://arxiv.org/abs/2510.13282},
121
+ }
122
+ ```