---
task_categories:
- image-to-image
license: cc-by-nc-4.0
tags:
- image-enhancement
- hdr
- multi-exposure
---

# UNICE Dataset Description

This is the dataset released with the paper: [UNICE: Training A Universal Image Contrast Enhancer](https://huggingface.co/papers/2507.17157).

The UNICE dataset enables training a universal, well-generalized model for a variety of image contrast enhancement tasks without costly human labeling. It comprises HDR raw images used to render multi-exposure sequences (MES), along with pseudo sRGB ground truths produced by multi-exposure fusion.

**Code:** [https://github.com/RuodaiCui/UNICE](https://github.com/RuodaiCui/UNICE)

## 1. `UNICEdataset.zip`
- **Type**: Multi-Exposure Sequences (MES)
- **Content**: sRGB images rendered from HDR raw images using an emulated ISP pipeline.
- **Structure**: Each sequence contains multiple images of the same scene with varying exposure values (EVs), from -3EV to +3EV.
- **Purpose**: Serves as input data for training and evaluating exposure and contrast enhancement models.
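Since each sequence spans -3EV to +3EV, training code usually needs the frames in exposure order. The sketch below sorts a sequence by the EV encoded in the filename; the filename pattern (`..._+1EV.png` etc.) is an assumption for illustration, so adapt the regex to the actual naming in the extracted archive.

```python
import re

def sort_by_ev(filenames):
    """Sort a multi-exposure sequence by exposure value (EV).

    Assumes the EV is encoded in the filename (e.g. 'scene_-3EV.png');
    the naming convention in the released archive may differ.
    """
    def ev_of(name):
        m = re.search(r"([+-]?\d+(?:\.\d+)?)EV", name)
        return float(m.group(1)) if m else 0.0
    return sorted(filenames, key=ev_of)

# Hypothetical filenames, for illustration only.
mes = ["scene_+1EV.png", "scene_-3EV.png", "scene_0EV.png", "scene_+3EV.png"]
print(sort_by_ev(mes))
```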

## 2. `pseudoGT.zip`
- **Type**: Pseudo Ground Truths
- **Content**: High-quality sRGB images generated by fusing the MES using an ensemble of multi-exposure fusion (MEF) techniques.
- **Purpose**: Used as the target output (pseudo-GT) for supervised training of enhancement models.
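For supervised training, each MES must be matched to its pseudo-GT image. Below is a minimal pairing sketch under an assumed layout: one subdirectory per scene under the extracted `UNICEdataset` folder and one same-named GT image under `pseudoGT`. The directory names and the synthetic files in the demo are placeholders; check the real structure after extraction.

```python
import os
import tempfile

def build_pairs(mes_root, gt_root, exts=(".png", ".jpg")):
    """Pair each MES scene directory with its pseudo-GT image.

    Assumes one subdirectory per scene under mes_root and one GT image
    per scene under gt_root sharing the scene name; adjust after
    inspecting the extracted archives.
    """
    pairs = []
    for scene in sorted(os.listdir(mes_root)):
        seq_dir = os.path.join(mes_root, scene)
        if not os.path.isdir(seq_dir):
            continue
        frames = sorted(
            os.path.join(seq_dir, f)
            for f in os.listdir(seq_dir)
            if f.lower().endswith(exts)
        )
        for ext in exts:
            gt = os.path.join(gt_root, scene + ext)
            if os.path.isfile(gt):
                pairs.append((frames, gt))
                break
    return pairs

# Tiny synthetic layout, for illustration only.
root = tempfile.mkdtemp()
mes_root = os.path.join(root, "UNICEdataset")
gt_root = os.path.join(root, "pseudoGT")
os.makedirs(os.path.join(mes_root, "scene001"))
os.makedirs(gt_root)
for name in ("m3.png", "ev0.png", "p3.png"):
    open(os.path.join(mes_root, "scene001", name), "w").close()
open(os.path.join(gt_root, "scene001.png"), "w").close()

pairs = build_pairs(mes_root, gt_root)
print(len(pairs))  # 1 scene paired
```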

## Sample Usage

To download the dataset using Git LFS:

```bash
git lfs install
git clone https://huggingface.co/datasets/lahaina/UNICE
```

After downloading, you will find `UNICEdataset.zip` and `pseudoGT.zip`. For model training (e.g., as described in the associated code repository), you would typically extract these files and configure your `dataset_folder` to point to the extracted data. For instance, you might place the extracted contents into a directory like `data/exposure` and use it with the training scripts.
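The extraction step above can be scripted with the standard library; this sketch unpacks the archives into a `data/exposure` folder as in the example layout. The demo builds a small stand-in zip so the snippet runs on its own; point `zip_paths` at the real `UNICEdataset.zip` and `pseudoGT.zip` instead, and note that the folder expected by the training scripts may differ.

```python
import os
import tempfile
import zipfile

def extract_archives(zip_paths, dataset_folder):
    """Extract the downloaded archives into the training data folder."""
    os.makedirs(dataset_folder, exist_ok=True)
    for zp in zip_paths:
        with zipfile.ZipFile(zp) as zf:
            zf.extractall(dataset_folder)

# Demonstration with a small stand-in archive.
root = tempfile.mkdtemp()
demo_zip = os.path.join(root, "UNICEdataset.zip")
with zipfile.ZipFile(demo_zip, "w") as zf:
    zf.writestr("scene001/ev0.png", b"")

dataset_folder = os.path.join(root, "data", "exposure")
extract_archives([demo_zip], dataset_folder)
print(os.path.exists(os.path.join(dataset_folder, "scene001", "ev0.png")))  # True
```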