---
license: mit
---
# [ICCV 2023] MEFLUT: Unsupervised 1D Lookup Tables for Multi-exposure Image Fusion [[Paper](https://arxiv.org/abs/2309.11847)]

<h4 align="center">Ting Jiang<sup>1,*</sup>, Chuan Wang<sup>1</sup>, Xinpeng Li<sup>1</sup>, Ru Li<sup>1</sup>, Haoqiang Fan<sup>1</sup>, Shuaicheng Liu<sup>2,1,†</sup></h4>
<h4 align="center">1. Megvii Research, 2. University of Electronic Science and Technology of China</h4>
<h6 align="center">†Corresponding author</h6>
## Abstract

In this paper, we introduce a new approach for high-quality multi-exposure image fusion (MEF). We show that the fusion weights of an exposure can be encoded into a 1D lookup table (LUT), which takes a pixel intensity value as input and produces a fusion weight as output. We learn one 1D LUT for each exposure, so all the pixels from different exposures can independently query the 1D LUT of their exposure for high-quality and efficient fusion. Specifically, to learn these 1D LUTs, we incorporate attention mechanisms across multiple dimensions, including frame, channel and spatial ones, into the MEF task, which brings significant quality improvement over the state-of-the-art (SOTA). In addition, we collect a new MEF dataset consisting of 960 samples, 155 of which are manually tuned by professionals as ground truth for evaluation. Our network is trained on this dataset in an unsupervised manner. Extensive experiments demonstrate the effectiveness of all the newly proposed components, and results show that our approach outperforms the SOTA on both our dataset and the representative SICE dataset, qualitatively and quantitatively. Moreover, our 1D LUT approach processes a 4K image in less than 4 ms on a PC GPU. Given its high quality, efficiency and robustness, our method has been shipped into millions of Android mobiles across multiple brands worldwide.
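The core inference-time idea above — each pixel queries the 1D LUT of its own exposure by intensity, and the fused pixel is the normalized weighted sum — can be sketched as follows. This is a minimal NumPy illustration, not the repository's implementation; the hand-made LUT values here stand in for the learned tables.

```python
# Minimal sketch of LUT-based fusion at inference time.
# The LUT values below are illustrative; in MEFLUT they are learned per exposure.
import numpy as np

def fuse_with_luts(frames, luts, eps=1e-8):
    """frames: list of uint8 grayscale images (H, W); luts: list of (256,) weight tables."""
    frames = [f.astype(np.float64) for f in frames]
    # Each pixel queries the 1D LUT of its own exposure by intensity value.
    weights = [lut[f.astype(np.uint8)] for f, lut in zip(frames, luts)]
    total = np.sum(weights, axis=0) + eps  # normalize weights across exposures
    fused = sum(w * f for w, f in zip(weights, frames)) / total
    return fused

# Two toy 2x2 "exposures" and hand-made LUTs (learned in the actual method).
dark = np.full((2, 2), 40, dtype=np.uint8)
bright = np.full((2, 2), 200, dtype=np.uint8)
ramp = np.linspace(0.0, 1.0, 256)
luts = [ramp, 1.0 - ramp]
out = fuse_with_luts([dark, bright], luts)  # fused values lie between 40 and 200
```

Because each exposure's LUT is only a 256-entry table, the whole fusion reduces to table lookups plus a weighted average, which is what makes the sub-4 ms 4K runtime plausible.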
## Pipeline

![pipeline](./assets/pipeline.png)
## Our Dataset

The dataset, including both the training and test sets, can be downloaded from [Baidu Netdisk](https://pan.baidu.com/s/1F3Re_2CWYibsqejN-pA2tw?pwd=2tih) (extraction code: 2tih) or [Huggingface](https://huggingface.co/datasets/peterJing/MEFLUT/tree/main). The dataset is organized as follows:
```
Training set
|--train
|  |--0            // Sample ID; the files inside range from EV-4 to EV+2.
|  |  |--0.jpg
|  |  |--1.jpg
|  |  |--2.jpg
|  |  |--3.jpg
|  |--1
|  |--2
|  |--3
|  |--...
|--train.txt       // Training set index.

Test set
|--test
|  |--0            // Sample ID; the files inside range from EV-4 to EV+2.
|  |  |--0.jpg
|  |  |--1.jpg
|  |  |--2.jpg
|  |  |--3.jpg
|  |--1
|  |--2
|  |--3
|  |--...
|--test.txt        // Test set index.
```

We provide a `data_selection.py` script under the `data_utils` folder so that users can select 2, 3, or 4 frames per sample as needed. In our actual training, frames were also selected randomly, so the experimental results may contain some bias.
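As a rough illustration of what such frame selection looks like, the sketch below picks a fixed subset of the four per-sample frames (`0.jpg`–`3.jpg`). The subset choices and the `select_frames` helper are hypothetical; the actual `data_utils/data_selection.py` may select frames differently (e.g. randomly, as noted above).

```python
# Hypothetical frame selection over a 4-frame sample directory (0.jpg..3.jpg,
# spanning EV-4 to EV+2). Illustrative only; the repo's data_selection.py
# may use different (e.g. random) subsets.
import os

FRAME_SUBSETS = {2: [0, 3], 3: [0, 1, 3], 4: [0, 1, 2, 3]}  # assumed subsets

def select_frames(sample_dir, n_frames):
    """Return paths of the chosen frames for one sample directory."""
    if n_frames not in FRAME_SUBSETS:
        raise ValueError("n_frames must be 2, 3 or 4")
    return [os.path.join(sample_dir, f"{i}.jpg") for i in FRAME_SUBSETS[n_frames]]

paths = select_frames("train/0", 2)  # e.g. the EV-4 and EV+2 frames
```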
## Usage

### Requirements

This code was developed with:

* Python 3.7.10
* PyTorch 1.12.1
* torchvision 0.13.1
* CUDA 10.0 on Ubuntu 18.04

We strongly recommend using Anaconda so that you can reproduce our results.

Install the required dependencies:

```bash
conda create -n meflut python=3.7.10
conda activate meflut
pip install -r requirements.txt
```
### Training

```bash
cd MEFLUT
python main.py --status 1dluts_train  # run on a GPU
```

### Testing

```bash
cd MEFLUT
python main.py --status 1dluts_eval  # run on a GPU
```
## Citation

If you find this work helpful, please cite our paper:

```bibtex
@InProceedings{Jiang_2023_ICCV,
    author    = {Jiang, Ting and Wang, Chuan and Li, Xinpeng and Li, Ru and Fan, Haoqiang and Liu, Shuaicheng},
    title     = {MEFLUT: Unsupervised 1D Lookup Tables for Multi-exposure Image Fusion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {10542-10551}
}
```