Improve model card for DarkIR with metadata, links, and usage

#2 by nielsr HF Staff - opened

Files changed (1): README.md (+162, -3)
---
license: cc-by-4.0
pipeline_tag: image-to-image
---

# DarkIR: Robust Low-Light Image Restoration

[![Hugging Face](https://img.shields.io/badge/Demo-%F0%9F%A4%97%20Hugging%20Face-blue)](https://huggingface.co/spaces/Cidaut/DarkIR)
[![arXiv](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/2412.13443)
[![Paper](https://img.shields.io/badge/Paper-%F0%9F%A4%97%20Hugging%20Face-purple)](https://huggingface.co/papers/2412.13443)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue?logo=github)](https://github.com/cidautai/DarkIR)

**[Daniel Feijoo](https://scholar.google.com/citations?hl=en&user=hqbPn4YAAAAJ), [Juan C. Benito](https://scholar.google.com/citations?hl=en&user=f186MIUAAAAJ), [Alvaro Garcia](https://scholar.google.com/citations?hl=en&user=c6SJPnMAAAAJ), [Marcos V. Conde](https://scholar.google.com/citations?user=NtB1kjYAAAAJ&hl=en)** (CIDAUT AI and University of Wuerzburg)

🚀 The model was presented at CVPR 2025. Try it for free in the 🤗 [Hugging Face Space: DarkIR](https://huggingface.co/spaces/Cidaut/DarkIR) and download the [model weights/checkpoints](https://cidautes-my.sharepoint.com/:f:/g/personal/alvgar_cidaut_es/Epntbl4SucFNpeIT_jyYZ-cB9BamMbacbyq_svrkMCpShA?e=XB9YBB).

**TL;DR.** In low-light conditions, images suffer from both noise and blur, yet previous methods cannot handle dark noisy images and dark blurry images with a single model. We propose the first all-in-one approach to low-light restoration, jointly handling illumination enhancement, denoising, and deblurring.

### Abstract
Photography at night or in dark conditions typically suffers from noise, low light, and blur due to the dim environment and the common use of long exposures. Although deblurring and Low-light Image Enhancement (LLIE) are related under these conditions, most image restoration approaches solve these tasks separately. In this paper, we present an efficient and robust neural network for multi-task low-light image restoration. Instead of following the current tendency toward Transformer-based models, we propose new attention mechanisms to enhance the receptive field of efficient CNNs. Our method reduces computational cost in terms of parameters and MAC operations compared to previous methods. Our model, DarkIR, achieves new state-of-the-art results on the popular LOLBlur, LOLv2, and Real-LOLBlur datasets, and generalizes to real-world night and dark images.

---

| <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/0085_low.png" alt="Low-light w/ blur" width="450"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/0085_retinexformer.png" alt="RetinexFormer" width="450"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/0085_darkir.png" alt="DarkIR (ours)" width="450"> |
|:-------------------------:|:-------------------------:|:-------------------------:|
| Low-light w/ blur | RetinexFormer | **DarkIR** (ours) |
| <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/low00747.png" alt="Low-light w/o blur" width="450"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/low00747_lednet.png" alt="LEDNet" width="450"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/teaser/low00747_darkir.png" alt="DarkIR (ours)" width="450"> |
| Low-light w/o blur | LEDNet | **DarkIR** (ours) |

&nbsp;

## Network Architecture

![Network Architecture](https://github.com/cidautai/DarkIR/raw/main/assets/networks-scheme.png)

## Dependencies and Installation

- Python == 3.10.12
- PyTorch == 2.5.1
- CUDA == 12.4
- Other required packages are listed in `requirements.txt`

```bash
# clone this repository
git clone https://github.com/Fundacion-Cidaut/DarkIR.git
cd DarkIR

# create a python environment
python3 -m venv venv_DarkIR
source venv_DarkIR/bin/activate

# install python dependencies
pip install -r requirements.txt
```
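
After installation, a quick sanity check can confirm the environment matches the pinned versions above. This is a minimal sketch; it only assumes `python3` and `pip` are on the PATH:

```shell
# Print interpreter and key package versions; compare against the pins above
python3 --version                                   # expect: Python 3.10.12
pip show torch 2>/dev/null | grep '^Version' || echo "torch not installed yet"
```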

## Datasets
The datasets used for training and/or evaluation are:

| Dataset | Sets of images | Source |
| ----------- | :---------------: | ------ |
| LOL-Blur | 10200 training pairs / 1800 test pairs | [LEDNet](https://github.com/sczhou/LEDNet) |
| LOLv2-real | 689 training pairs / 100 test pairs | [Google Drive](https://drive.google.com/file/d/1dzuLCk9_gE2bFF222n3-7GVUlSVHpMYC/view) |
| LOLv2-synth | 900 training pairs / 100 test pairs | [Google Drive](https://drive.google.com/file/d/1dzuLCk9_gE2bFF222n3-7GVUlSVHpMYC/view) |
| LOL | 485 training pairs / 15 test pairs | [Official Site](https://daooshee.github.io/BMVC2018website/) |
| Real-LOLBlur | 1354 unpaired images | [LEDNet](https://github.com/sczhou/LEDNet) |
| LSRW-Nikon | 3150 training pairs / 20 test pairs | [R2RNet](https://github.com/JianghaiSCU/R2RNet) |
| LSRW-Huawei | 2450 training pairs / 30 test pairs | [R2RNet](https://github.com/JianghaiSCU/R2RNet) |

You can download each specific dataset and place it in the `data/datasets` folder for testing.
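
For example, one possible layout is sketched below. The folder names here are assumptions for illustration; the exact paths each test expects are set in the `./options/test/*.yml` config files, so adjust accordingly:

```shell
# Hypothetical dataset layout under data/datasets — adjust folder names
# to match the paths defined in ./options/test/*.yml
mkdir -p data/datasets/LOLBlur
mkdir -p data/datasets/LOLv2-real
mkdir -p data/datasets/LOLv2-synth
mkdir -p data/datasets/Real-LOLBlur
ls data/datasets
```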

## Results
We report results on several datasets for two sizes of DarkIR: **DarkIR-m** has a channel depth of 32, 3.31 M parameters, and 7.25 GMACs, while **DarkIR-l** has a channel depth of 64, 12.96 M parameters, and 27.19 GMACs.

| Dataset | Model | PSNR | SSIM | LPIPS |
| ----------- | :---------------: | :------: | :------: | :------: |
| LOL-Blur | DarkIR-m | 27.00 | 0.883 | 0.162 |
| | DarkIR-l | 27.30 | 0.898 | 0.137 |
| LOLv2-real | DarkIR-m | 23.87 | 0.880 | 0.186 |
| LOLv2-synth | DarkIR-m | 25.54 | 0.934 | 0.058 |
| LSRW-Both | DarkIR-m | 18.93 | 0.583 | 0.412 |

We also report perceptual (no-reference) metrics on the Real-LOLBlur dataset:

| Model | MUSIQ | NRQM | NIQE |
| ----------- | :---------------: | :------: | :------: |
| DarkIR-m | 48.36 | 4.983 | 4.998 |
| DarkIR-l | 48.79 | 4.917 | 5.051 |

> LOL-Blur results were obtained by training the network only on this dataset. The best results on LOLv2-real, LOLv2-synth, and both LSRW sets were obtained by multi-task training on these three datasets together with LOL-Blur (which yields 26.63 PSNR and 0.875 SSIM on LOL-Blur). Finally, the Real-LOLBlur results were obtained with a model trained on LOL-Blur.

In addition, we evaluated **DarkIR-m** on real-world unpaired LLIE datasets (downloaded from [Google Drive](https://drive.google.com/drive/folders/0B_FjaR958nw_djVQanJqeEhUM1k?usp=sharing)):

| | DICM | MEF | LIME | NPE | VV |
| ----------- | :---------------: | :------: | :------: | :------: | :------: |
| BRISQUE | 18.688 | 13.903 | 21.62 | 12.877 | 26.87 |
| NIQE | 3.759 | 3.448 | 4.074 | 3.991 | 3.74 |

## Evaluation

To reproduce our results, you can run the evaluation of DarkIR on each of the datasets:

- Download the model weights from [OneDrive](https://cidautes-my.sharepoint.com/:f:/g/personal/alvgar_cidaut_es/Epntbl4SucFNpeIT_jyYZ-cB9BamMbacbyq_svrkMCpShA?e=XB9YBB) and put them in `./models`.
- Run `python testing.py -p ./options/test/<config.yml>`. The default is LOLBlur.

> You may also check the qualitative results on `Real-LOLBlur` and the unpaired LLIE datasets by running `python testing_unpaired.py -p ./options/test/<config.yml>`. The default is RealBlur.

## Inference

You can restore a whole folder of images by running:

```bash
python inference.py -i <folder_path>
```

Restored images will be saved in `./images/results`.

To run inference on a video, use:

```bash
python inference_video.py -i /path/to/video.mp4
```

The restored video will be saved in `./videos/results`.

## Gallery

<p align="center"> <strong> LOLv2-real </strong> </p>

| <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2real/low00733_low.png" alt="Low-light" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2real/00733_snr.png" alt="SNR-Net" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2real/low00733_retinexformer.png" alt="RetinexFormer" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2real/low00733_darkir.png" alt="DarkIR (ours)" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2real/normal00733.png" alt="Ground Truth" width="300"> |
|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
| Low-light | SNR-Net | RetinexFormer | **DarkIR** (ours) | Ground Truth |

<p align="center"> <strong> LOLv2-synth </strong> </p>

| <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2synth/r13073518t_low.png" alt="Low-light" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2synth/r13073518t_snr.png" alt="SNR-Net" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2synth/r13073518t_retinexformer.png" alt="RetinexFormer" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2synth/r13073518t_darkir.png" alt="DarkIR (ours)" width="300"> | <img src="https://github.com/cidautai/DarkIR/raw/main/assets/lolv2synth/r13073518t_normal.png" alt="Ground Truth" width="300"> |
|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|
| Low-light | SNR-Net | RetinexFormer | **DarkIR** (ours) | Ground Truth |

&nbsp;

<p align="center"> <strong> Real-LOLBlur-Night </strong> </p>

<p align="center"> <img src="https://github.com/cidautai/DarkIR/raw/main/assets/qualis_realblur_night.jpg" alt="Example Image" width="70%"> </p>

## Citation and acknowledgement

This work was accepted for publication and presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025.

```bibtex
@InProceedings{Feijoo_2025_CVPR,
    author    = {Feijoo, Daniel and Benito, Juan C. and Garcia, Alvaro and Conde, Marcos V.},
    title     = {DarkIR: Robust Low-Light Image Restoration},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {10879-10889}
}
```

## Contact

If you have any questions, please contact danfei@cidaut.es and marcos.conde@uni-wuerzburg.de.