---
pretty_name: LEHA-CVQAD
tags:
- video
- computer-vision
- compression
- video-quality-assessment
- subjective-quality
- benchmark
size_categories:
- 1K<n<10K
license: apache-2.0
---

# Dataset Card for LEHA-CVQAD

## Dataset Summary

LEHA-CVQAD is a large-scale dataset for **compressed video quality assessment**. It is designed for benchmarking and training both **full-reference (FR)** and **no-reference (NR)** video quality assessment methods on modern compression artifacts.

The dataset combines:

- **diverse source content**, including both professionally produced material and user-generated content,
- **modern compression standards and codec presets**,
- **pairwise preference annotations** converted into ranking scores,
- **MOS / DMOS annotations** on a selected subset,
- and a public **open split** plus a **hidden split** used for blind benchmark evaluation.

This repository contains the **public open part** of the dataset. The hidden part is not released publicly and is used for benchmark evaluation to reduce overfitting.

- Paper: https://arxiv.org/abs/2507.03990
- Earlier dataset / methodology paper: https://arxiv.org/abs/2211.12109
- Benchmark page: https://videoprocessing.ai/benchmarks/video-quality-metrics.html
- Dataset / project page: https://videoprocessing.ai/datasets/cvqad.html

## Supported Tasks and Leaderboards

This dataset can be used for:

1. **No-reference video quality assessment**
   - Predict perceptual quality from a distorted video alone.

2. **Full-reference video quality assessment**
   - Predict perceptual quality from a distorted video and its pristine reference.

3. **Pairwise ranking / preference learning**
   - Learn the relative quality ordering between compressed variants of the same source content.

4. **Quality regression**
   - Predict MOS, DMOS, Bradley-Terry scores, Elo scores, or a fused subjective score.

5. **Codec / rate-distortion optimization research**
   - Study how well objective metrics align with human preference under bitrate constraints.

Benchmark results for many IQA/VQA metrics are reported on the MSU benchmark website.

## Languages

The dataset is purely visual; spoken language is not an annotation axis.

## Dataset Structure

### Data Instances

A typical dataset instance represents one compressed video and its metadata.
The field names below are illustrative; the exact keys are defined in the released CSV / JSON metadata files.

```json
{
  "id": "leha_cvqad_000001",
  "reference_id": "src_012",
  "distorted_video": "distorted/codec_xxx/video_000001.mp4",
  "reference_video": "references/src_012.y4m",
  "split": "open",
  "source_type": "raw_or_ugc",
  "content_category": "sports",
  "codec_family": "hevc",
  "codec_name": "x265",
  "preset": "medium",
  "target_bitrate_kbps": 2000,
  "width": 1920,
  "height": 1080,
  "fps": 30,
  "bt_score": 0.73,
  "elo_score": 1462.1,
  "mos": 13.6,
  "dmos": 5.2,
  "fused_score": 0.69
}
```

### Data Fields

The metadata schema of a typical release contains:

* `id`: unique sample identifier
* `reference_id`: identifier of the source video
* `distorted_video`: path or filename of the compressed video
* `reference_video`: path or filename of the reference video for FR evaluation
* `split`: dataset split, usually `open`
* `source_type`: whether the source content is pristine / professional or UGC
* `content_category`: coarse content label, if provided
* `codec_family`: compression standard family (for example AVC, HEVC, VVC, AV1, VP9)
* `codec_name`: concrete encoder / codec implementation
* `preset`: encoding preset
* `target_bitrate_kbps`: target bitrate used during encoding
* `width`, `height`, `fps`: technical properties of the sample
* `bt_score`: Bradley-Terry subjective ranking score
* `elo_score`: Elo-based subjective ranking score
* `mos`: mean opinion score
* `dmos`: differential mean opinion score
* `fused_score`: unified score derived from the pairwise and rating experiments
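
As a minimal sketch, metadata in this shape can be parsed with the standard library and grouped by source video for FR evaluation or pairwise training. The inline CSV, column subset, and types here are assumptions based on the illustrative schema above, not the released file layout:

```python
import csv
import io

# Hypothetical rows following the illustrative schema above;
# consult the released metadata files for the actual columns.
CSV_TEXT = """id,reference_id,codec_family,target_bitrate_kbps,fused_score
leha_cvqad_000001,src_012,hevc,2000,0.69
leha_cvqad_000002,src_012,av1,1000,0.41
"""

def load_metadata(text):
    """Parse metadata rows and cast the numeric columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["target_bitrate_kbps"] = int(row["target_bitrate_kbps"])
        row["fused_score"] = float(row["fused_score"])
        rows.append(row)
    return rows

rows = load_metadata(CSV_TEXT)

# Group distorted videos by their source: variants of one reference are
# directly comparable, which is what pairwise training relies on.
by_reference = {}
for row in rows:
    by_reference.setdefault(row["reference_id"], []).append(row["id"])
```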

### Data Splits

The full LEHA-CVQAD benchmark is divided into:

* **Open split:** 1,963 videos
* **Hidden split:** 4,277 videos

The full benchmark contains 6,240 distorted videos in total.
This Hugging Face repository releases the **open split only**.

## Dataset Creation

### Curation Rationale

The dataset was created to support **generalized video quality assessment of compression artifacts**. Existing public datasets often have one or more of the following limitations:

* limited codec diversity,
* limited content diversity,
* lack of authentic UGC artifacts,
* small scale,
* or subjective labels that are hard to compare across sources.

LEHA-CVQAD was designed to address these issues by combining diverse source content, a wider range of codec presets, and a richer subjective annotation protocol.

### Source Data

#### Initial source video extraction

The source-video extraction methodology follows the earlier CVQAD work. In that pipeline, high-bitrate open-source videos were collected from Vimeo, filtered by license and bitrate, converted to YUV 4:2:0, and sampled using spatial/temporal complexity clustering to obtain a representative and diverse set of reference videos.

#### LEHA-CVQAD extension

LEHA-CVQAD extends this approach by collecting a larger candidate pool of FullHD pristine videos from Vimeo and Xiph, adding more UGC content, and sampling a final set of **59** reference videos through complexity-aware clustering and manual selection for genre diversity.

The final source set includes categories such as:

* sports,
* gaming,
* nature,
* interviews / television clips,
* animation,
* vlogs,
* advertisements,
* music videos,
* water surfaces,
* face close-ups,
* and UGC.

### Data Collection and Processing

Each source video was compressed using a broad range of codecs and presets to cover modern compression artifacts. The full benchmark uses:

* **186 codec / preset variants**
* **3 target bitrates:** 1000, 2000, and 4000 kbps
* multiple compression standards, including AVC, HEVC, VVC, VP9, AV1, and others

Not every source video was encoded with every available codec.

The public benchmark design separates:

* **open-source codec outputs** into the public split,
* and **proprietary codec outputs** into the hidden split used for blind evaluation.

### Annotations

LEHA-CVQAD provides two kinds of subjective labels:

1. **Pairwise rankings**
   - Viewers compare two compressed videos derived from the same source and choose the better one or mark them as equivalent.
   - These comparisons are converted into subjective ranking scores using **Bradley-Terry** and **Elo** models.

2. **MOS / DMOS labels**
   - A subset of videos is rated with **Absolute Category Rating (ACR)** on a **21-point scale**.
   - Reference videos are included to support DMOS computation.

The final dataset uses a fusion procedure to combine pairwise and rating information into a more globally consistent quality scale.
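
To illustrate how pairwise preferences become ranking scores, the sketch below fits a Bradley-Terry model with the classic minorization-maximization (MM) update on toy win counts. It is a simplified stand-in, not the authors' fitting code: tie votes could, for example, be split as half-wins to each side, and the real pipeline may regularize or normalize differently.

```python
from collections import defaultdict

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from directed pairwise win counts.

    wins[(a, b)] = number of times video a was preferred over video b.
    Uses the standard MM update: p_i = W_i / sum_j n_ij / (p_i + p_j),
    where W_i is i's total wins and n_ij the comparisons between i and j.
    """
    items = sorted({v for pair in wins for v in pair})
    p = dict.fromkeys(items, 1.0)
    total_wins = defaultdict(float)
    for (winner, _loser), w in wins.items():
        total_wins[winner] += w
    for _ in range(n_iter):
        new_p = {}
        for i in items:
            denom = sum(w / (p[a] + p[b])
                        for (a, b), w in wins.items() if i in (a, b))
            new_p[i] = total_wins[i] / denom if denom > 0 else p[i]
        scale = sum(new_p.values())  # normalize so scores sum to 1
        p = {i: v / scale for i, v in new_p.items()}
    return p

# Toy example: a is usually preferred over b, and b over c.
wins = {("a", "b"): 9, ("b", "a"): 1,
        ("b", "c"): 8, ("c", "b"): 2,
        ("a", "c"): 10, ("c", "a"): 0}
scores = bradley_terry(wins)
```

The fitted strengths recover the expected ordering `a > b > c` on this toy input.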

### Subjective Study Protocol

All subjective data were collected through a crowdsourcing platform in a browser-based full-screen setting. Videos were shown at FullHD resolution and pre-buffered before playback. Verification questions and participant filtering were used for quality control.

For pairwise comparisons:

* each task contained 12 videos presented as pairs,
* two comparisons per task were verification questions,
* at least 10 valid responses were collected for each pair.

For MOS collection:

* participants completed training before rating,
* quality control filtered inconsistent or low-effort responses.

### Statistics

For the full LEHA-CVQAD benchmark:

* **59** source videos
* **6,240** distorted videos
* **1,963** videos in the public open split
* **4,277** videos in the hidden split
* **186** codec / preset variants
* approximately **1,797,310** valid pairwise responses
* more than **15,000** unique participants in the pairwise experiments
* MOS responses collected from **1,496** participants

## Dataset Use

### Direct Use

The dataset is suitable for:

* training NR-VQA models,
* training FR-VQA models,
* metric benchmarking,
* pairwise ranking models,
* regression models for perceptual quality,
* and research on codec optimization and rate-distortion alignment.

### Out-of-Scope Use

The dataset is not intended for:

* general video understanding,
* semantic recognition,
* action recognition,
* captioning,
* or speech / language tasks.

It should also not be treated as a universal proxy for all video distortions, since it is specifically oriented toward **compression-related quality assessment**.

## Bias, Risks, and Limitations

Several limitations should be considered:

* The public release covers only the **open split**. Results on this split alone may overestimate generalization compared with blind benchmark evaluation.
* The dataset focuses on **compression artifacts** and is less suitable for unrelated distortions such as camera shake, defocus, or transmission artifacts, unless these are already present in the source content.
* Subjective studies were crowd-based rather than fully controlled laboratory studies.
* Some content categories, codecs, or bitrate regions may be easier for current metrics than others.
* Pairwise labels are naturally local to variants of the same source; the fused scale reduces, but may not eliminate, cross-content comparability issues.

## Data Preprocessing

Typical preprocessing for research use may include:

* decoding compressed videos to frames,
* temporal subsampling,
* patch or clip extraction,
* normalization of subjective labels,
* and pairing distorted videos with reference videos for FR evaluation.

Researchers should report preprocessing choices clearly for reproducibility.
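
For instance, a common temporal-subsampling step picks uniformly spaced frame indices from each clip. This is a generic sketch of one reasonable choice, not a protocol prescribed by the dataset:

```python
def sample_frame_indices(n_frames, n_samples):
    """Return uniformly spaced frame indices for temporal subsampling."""
    if n_samples >= n_frames:
        return list(range(n_frames))
    step = n_frames / n_samples
    # Take the middle frame of each of n_samples equal segments.
    return [int(step * i + step / 2) for i in range(n_samples)]
```

A 10-second 30 fps clip (300 frames) subsampled to 3 frames yields indices 50, 150, and 250.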

## Evaluation

Common evaluation protocols include:

* Spearman rank correlation coefficient (SRCC)
* Pearson linear correlation coefficient (PLCC)
* Kendall rank correlation coefficient (KRCC)
* pairwise ranking accuracy
* codec-wise or bitrate-wise subgroup evaluation

When reproducing benchmark results, users should ensure that they do not train on hidden benchmark content.
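
The correlation measures above are available in `scipy.stats` (`spearmanr`, `pearsonr`, `kendalltau`). As a dependency-free sketch, SRCC and pairwise ranking accuracy over per-video predicted and subjective scores can be computed as below; the exact benchmark protocol (for example, per-codec or per-content grouping) may differ:

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = mean_rank
        i = j + 1
    return out

def srcc(pred, subj):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(pred), ranks(subj)
    n = len(pred)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def pairwise_accuracy(pred, subj):
    """Fraction of video pairs whose subjective ordering the metric reproduces."""
    correct = total = 0
    for i in range(len(pred)):
        for j in range(i + 1, len(pred)):
            if subj[i] == subj[j]:
                continue  # skip subjectively tied pairs
            total += 1
            correct += (pred[i] - pred[j]) * (subj[i] - subj[j]) > 0
    return correct / total
```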

## Citation

If you use this dataset, please cite the LEHA-CVQAD paper and the earlier CVQAD methodology paper:

```bibtex
@article{gushchin2025leha,
  title={LEHA-CVQAD: Dataset To Enable Generalized Video Quality Assessment of Compression Artifacts},
  author={Gushchin, Alexander and Smirnov, Maksim and Antsiferova, Anastasia and others},
  journal={arXiv preprint arXiv:2507.03990},
  year={2025}
}
```

```bibtex
@inproceedings{antsiferova2022video,
  title={Video compression dataset and benchmark of learning-based video-quality metrics},
  author={Antsiferova, Anastasia and Lavrushkin, Sergey and Smirnov, Maksim and Gushchin, Alexander and Vatolin, Dmitriy and Kulikov, Dmitriy},
  booktitle={NeurIPS 2022 Datasets and Benchmarks Track},
  year={2022}
}
```

## Additional Information

This repository corresponds to the public dataset release.
For blind evaluation on the hidden split and benchmark results for existing metrics, see the MSU benchmark pages linked above.