Dataset preview (first 100 rows):

video (string) | scene (30 classes) | algorithm (16 classes) |
|---|---|---|
arch-01__CECF.mp4 | arch-01_ | CECF |
arch-01__FUnIEGAN.mp4 | arch-01_ | FUnIEGAN |
arch-01__HCLR-Net.mp4 | arch-01_ | HCLR-Net |
arch-01__PhysicalNN.mp4 | arch-01_ | PhysicalNN |
arch-01__SCNet.mp4 | arch-01_ | SCNet |
arch-01__SGUIE.mp4 | arch-01_ | SGUIE |
arch-01__STSC.mp4 | arch-01_ | STSC |
arch-01__Semi-UIR.mp4 | arch-01_ | Semi-UIR |
arch-01__U-Trans.mp4 | arch-01_ | U-Trans |
arch-01__UColor.mp4 | arch-01_ | UColor |
arch-01__UIE-DM.mp4 | arch-01_ | UIE-DM |
arch-01__UIE-WD.mp4 | arch-01_ | UIE-WD |
arch-01__UVE-Net.mp4 | arch-01_ | UVE-Net |
arch-01__UWNet.mp4 | arch-01_ | UWNet |
arch-01__WaterNet.mp4 | arch-01_ | WaterNet |
arch-01__ref.mp4 | arch-01_ | ref |
arch-02__CECF.mp4 | arch-02_ | CECF |
arch-02__FUnIEGAN.mp4 | arch-02_ | FUnIEGAN |
arch-02__HCLR-Net.mp4 | arch-02_ | HCLR-Net |
arch-02__PhysicalNN.mp4 | arch-02_ | PhysicalNN |
arch-02__SCNet.mp4 | arch-02_ | SCNet |
arch-02__SGUIE.mp4 | arch-02_ | SGUIE |
arch-02__STSC.mp4 | arch-02_ | STSC |
arch-02__Semi-UIR.mp4 | arch-02_ | Semi-UIR |
arch-02__U-Trans.mp4 | arch-02_ | U-Trans |
arch-02__UColor.mp4 | arch-02_ | UColor |
arch-02__UIE-DM.mp4 | arch-02_ | UIE-DM |
arch-02__UIE-WD.mp4 | arch-02_ | UIE-WD |
arch-02__UVE-Net.mp4 | arch-02_ | UVE-Net |
arch-02__UWNet.mp4 | arch-02_ | UWNet |
arch-02__WaterNet.mp4 | arch-02_ | WaterNet |
arch-02__ref.mp4 | arch-02_ | ref |
arch-03__CECF.mp4 | arch-03_ | CECF |
arch-03__FUnIEGAN.mp4 | arch-03_ | FUnIEGAN |
arch-03__HCLR-Net.mp4 | arch-03_ | HCLR-Net |
arch-03__PhysicalNN.mp4 | arch-03_ | PhysicalNN |
arch-03__SCNet.mp4 | arch-03_ | SCNet |
arch-03__SGUIE.mp4 | arch-03_ | SGUIE |
arch-03__STSC.mp4 | arch-03_ | STSC |
arch-03__Semi-UIR.mp4 | arch-03_ | Semi-UIR |
arch-03__U-Trans.mp4 | arch-03_ | U-Trans |
arch-03__UColor.mp4 | arch-03_ | UColor |
arch-03__UIE-DM.mp4 | arch-03_ | UIE-DM |
arch-03__UIE-WD.mp4 | arch-03_ | UIE-WD |
arch-03__UVE-Net.mp4 | arch-03_ | UVE-Net |
arch-03__UWNet.mp4 | arch-03_ | UWNet |
arch-03__WaterNet.mp4 | arch-03_ | WaterNet |
arch-03__ref.mp4 | arch-03_ | ref |
arch-04__CECF.mp4 | arch-04_ | CECF |
arch-04__FUnIEGAN.mp4 | arch-04_ | FUnIEGAN |
arch-04__HCLR-Net.mp4 | arch-04_ | HCLR-Net |
arch-04__PhysicalNN.mp4 | arch-04_ | PhysicalNN |
arch-04__SCNet.mp4 | arch-04_ | SCNet |
arch-04__SGUIE.mp4 | arch-04_ | SGUIE |
arch-04__STSC.mp4 | arch-04_ | STSC |
arch-04__Semi-UIR.mp4 | arch-04_ | Semi-UIR |
arch-04__U-Trans.mp4 | arch-04_ | U-Trans |
arch-04__UColor.mp4 | arch-04_ | UColor |
arch-04__UIE-DM.mp4 | arch-04_ | UIE-DM |
arch-04__UIE-WD.mp4 | arch-04_ | UIE-WD |
arch-04__UVE-Net.mp4 | arch-04_ | UVE-Net |
arch-04__UWNet.mp4 | arch-04_ | UWNet |
arch-04__WaterNet.mp4 | arch-04_ | WaterNet |
arch-04__ref.mp4 | arch-04_ | ref |
arch-05__CECF.mp4 | arch-05_ | CECF |
arch-05__FUnIEGAN.mp4 | arch-05_ | FUnIEGAN |
arch-05__HCLR-Net.mp4 | arch-05_ | HCLR-Net |
arch-05__PhysicalNN.mp4 | arch-05_ | PhysicalNN |
arch-05__SCNet.mp4 | arch-05_ | SCNet |
arch-05__SGUIE.mp4 | arch-05_ | SGUIE |
arch-05__STSC.mp4 | arch-05_ | STSC |
arch-05__Semi-UIR.mp4 | arch-05_ | Semi-UIR |
arch-05__U-Trans.mp4 | arch-05_ | U-Trans |
arch-05__UColor.mp4 | arch-05_ | UColor |
arch-05__UIE-DM.mp4 | arch-05_ | UIE-DM |
arch-05__UIE-WD.mp4 | arch-05_ | UIE-WD |
arch-05__UVE-Net.mp4 | arch-05_ | UVE-Net |
arch-05__UWNet.mp4 | arch-05_ | UWNet |
arch-05__WaterNet.mp4 | arch-05_ | WaterNet |
arch-05__ref.mp4 | arch-05_ | ref |
arch-06__CECF.mp4 | arch-06_ | CECF |
arch-06__FUnIEGAN.mp4 | arch-06_ | FUnIEGAN |
arch-06__HCLR-Net.mp4 | arch-06_ | HCLR-Net |
arch-06__PhysicalNN.mp4 | arch-06_ | PhysicalNN |
arch-06__SCNet.mp4 | arch-06_ | SCNet |
arch-06__SGUIE.mp4 | arch-06_ | SGUIE |
arch-06__STSC.mp4 | arch-06_ | STSC |
arch-06__Semi-UIR.mp4 | arch-06_ | Semi-UIR |
arch-06__U-Trans.mp4 | arch-06_ | U-Trans |
arch-06__UColor.mp4 | arch-06_ | UColor |
arch-06__UIE-DM.mp4 | arch-06_ | UIE-DM |
arch-06__UIE-WD.mp4 | arch-06_ | UIE-WD |
arch-06__UVE-Net.mp4 | arch-06_ | UVE-Net |
arch-06__UWNet.mp4 | arch-06_ | UWNet |
arch-06__WaterNet.mp4 | arch-06_ | WaterNet |
arch-06__ref.mp4 | arch-06_ | ref |
arch-07__CECF.mp4 | arch-07_ | CECF |
arch-07__FUnIEGAN.mp4 | arch-07_ | FUnIEGAN |
arch-07__HCLR-Net.mp4 | arch-07_ | HCLR-Net |
arch-07__PhysicalNN.mp4 | arch-07_ | PhysicalNN |
# Dataset Card for UVE-Subjective-Benchmark

## Dataset Summary
The evaluation of Underwater Video Enhancement (UVE) remains a significant challenge due to the complex visual degradations inherent to aquatic environments and the temporal instability frequently introduced by frame-wise processing. Existing objective metrics (e.g., UIQM, UCIQE) often correlate poorly with human subjective judgement, particularly regarding dynamic artifacts like flickering and color shifts.
This dataset provides a validated, large-scale subjective benchmark for UVE. It captures the genuine spatial-temporal trade-offs made by the Human Visual System (HVS), offering a mathematically sound perceptual ground truth for evaluating enhancement algorithms and training future Video Quality Assessment (VQA) models.
## Dataset Structure
The repository provides both the visual stimuli and the full numerical scoring matrices needed for comprehensive analysis.
### 1. Video Data
- Standardized Sequences: A set of standardized underwater video sequences spanning 30 scenes.
- Model Outputs: The original degraded reference videos (`ref`) and the enhanced outputs generated by 15 representative deep learning models; each filename encodes its scene and algorithm, as parsed in the sketch after this list.
- Taxonomy Covered: Evaluated models span multiple architectural families, including physics-based priors, CNNs, GANs, Vision Transformers, and a dedicated video architecture.
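Each video in the preview table pairs a scene with an algorithm. Below is a minimal sketch for recovering those fields from a filename; the splitting rule is inferred from the preview table above, so treat it as an assumption rather than an official schema:

```python
from pathlib import Path

def parse_video_name(name: str) -> tuple[str, str]:
    """Recover (scene, algorithm) from a video filename.

    In the preview table, scene ids carry a trailing underscore
    (e.g. 'arch-01_'), so 'arch-01__CECF.mp4' -> ('arch-01_', 'CECF').
    """
    stem = Path(name).stem                  # drop the '.mp4' extension
    scene, algorithm = stem.rsplit("_", 1)  # split on the last underscore
    return scene, algorithm

for name in ["arch-01__CECF.mp4", "arch-03__UIE-DM.mp4", "arch-05__ref.mp4"]:
    print(parse_video_name(name))
# ('arch-01_', 'CECF'), ('arch-03_', 'UIE-DM'), ('arch-05_', 'ref')
```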
### 2. Evaluation Matrices
- Objective Metrics: Frame-by-frame UIQM and UCIQE objective scores for all enhanced sequences.
- Aggregated Data: Video-level and algorithm-level average scores (see the aggregation sketch after this list).
- Subjective Ground Truth: Reconstructed subjective Bradley-Terry (BT) scores and all associated statistical metrics derived from human preference data.
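To illustrate how frame-level metrics roll up into the aggregated tables, here is a minimal pandas sketch; the column names and score values are hypothetical, not the dataset's actual schema:

```python
import pandas as pd

# Hypothetical frame-level table; columns and values are illustrative only.
frames = pd.DataFrame({
    "video":     ["arch-01__CECF.mp4"] * 2 + ["arch-02__CECF.mp4"] * 2,
    "algorithm": ["CECF"] * 4,
    "uiqm":      [2.91, 2.87, 3.10, 3.04],
    "uciqe":     [0.58, 0.57, 0.61, 0.60],
})

# Video-level averages: mean over all frames of each clip.
video_level = (frames
               .groupby(["video", "algorithm"], as_index=False)[["uiqm", "uciqe"]]
               .mean())

# Algorithm-level averages: mean over all clips of each algorithm.
algo_level = (video_level
              .groupby("algorithm", as_index=False)[["uiqm", "uciqe"]]
              .mean())
print(algo_level)
```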
## Methodology
To neutralize observer scale drift and isolate dynamic artifacts, the human preference data was collected with a strict, forced-choice Pair Comparison (PC) protocol. A custom low-latency web-based evaluation platform was built to deliver synchronized, high-resolution video playback and collect robust PC data.
The discrete, binary human choices were then converted into a continuous perceptual quality ranking by fitting the Bradley-Terry (BT) probabilistic model via Maximum Likelihood Estimation (MLE).
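The card does not specify the exact estimation routine, but the standard MM (minorization-maximization, or Zermelo) iteration is a common way to fit BT scores by MLE. A minimal numpy sketch, assuming preferences have been tallied into a win-count matrix (all names and values here are illustrative):

```python
import numpy as np

def bradley_terry_scores(wins: np.ndarray, n_iter: int = 1000, tol: float = 1e-9) -> np.ndarray:
    """Fit Bradley-Terry log-strengths from a pairwise win-count matrix.

    wins[i, j] = how often condition i was preferred over condition j.
    Uses the classic MM (Zermelo) iteration; returns zero-mean log-scores.
    """
    n_pair = wins + wins.T          # total comparisons per pair
    total_wins = wins.sum(axis=1)   # total wins per condition
    p = np.ones(wins.shape[0])      # initial strengths
    for _ in range(n_iter):
        denom = n_pair / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = total_wins / denom.sum(axis=1)
        p_new /= p_new.sum()        # BT is scale-invariant; fix the scale
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    scores = np.log(p)
    return scores - scores.mean()

# Toy example with three conditions; condition 0 is usually preferred.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
print(bradley_terry_scores(wins))   # highest score for condition 0
```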
## Citation
If you use this dataset in your research, please cite the underlying work:
```bibtex
@article{Wang2026UVE,
  title       = {Quality Assessment on Enhanced Underwater Video},
  author      = {Wang, Tianshuo},
  year        = {2026},
  institution = {Central South University \& University of Dundee}
}
```