# MSU Compression Dataset Description

<br />

We developed the LEHA-CVQAD dataset to evaluate full-reference and no-reference video quality metrics. Here we share the open part of the full compression-artifacts dataset (1,962 of 6,240 videos); the hidden part is available only to benchmark-support personnel for testing metric performance. Most videos are FullHD resolution, YUV420, and 10-15 seconds long. Fps values are 24, 25, 30, 39, 50, and 60.

Subjective quality scores are also provided in a CSV file. The higher the score, the better the quality. To learn more about the subjective quality evaluation procedure of our benchmark, visit the FAQ section at [Subjectify.us](https://www.subjectify.us).

A more detailed description of the dataset and benchmark methodology can be found in the paper TODO.

Leaderboard of more than 100 metrics on LEHA-CVQAD dataset: [MSU Video Quality Metrics Benchmark page](https://videoprocessing.ai/benchmarks/video-quality-metrics.html).

<br />

## Dataset Folder Structure

<br />

* **Subjective_scores_and_videos_info.csv** contains subjective scores (MOS, Bradley-Terry, ELO) for each compressed video. Besides its subjective quality, each distorted video has the following characteristics:
	* *name of the original (pristine) video*
	* *codec used for encoding*
	* *codec standard (avc, hevc, vvc, av1, ...)*
	* *target bitrate or crf*
	* *bitrate range (high, mid, low)*
	* *original video resolution*
	* *original video fps*

<br />

* **Metrics_scores.csv** contains the values of 100+ VQA metrics on our dataset and can be used to compute their correlations with the subjective scores.

* **Compressed_and_GT_videos** contains 59 folders, each of which includes one *reference video* (GT), required for testing full-reference metrics, and multiple *distorted videos* (compressed), grouped by encoding preset:

	* ``Each distorted video has the following pattern: {video name}/{encoding preset}/{codec name}_{crf or bitrate}.mp4``

	* ``Each reference video has the following pattern: {video name}/GT.mp4``

<br />
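As an illustration, the path patterns above can be parsed with a few lines of Python (a sketch, not part of the dataset tooling; the file name below is hypothetical):

```python
from pathlib import PurePosixPath

def parse_distorted(path: str):
    """Parse {video name}/{encoding preset}/{codec name}_{crf or bitrate}.mp4."""
    p = PurePosixPath(path)
    video, preset = p.parts[0], p.parts[1]
    # Codec names may themselves contain underscores, so split on the last one.
    codec, value = p.stem.rsplit("_", 1)
    return video, preset, codec, value

# Hypothetical file name following the pattern above:
print(parse_distorted("basketball-2021/fast/x265_1000.mp4"))
# -> ('basketball-2021', 'fast', 'x265', '1000')
```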

## Correlation Calculation for MOS

The following pipeline should be applied only when correlating metric scores with **MOS** subjective scores.

Simply apply a single correlation coefficient to the **whole list** of MOS subjective scores and metric scores.
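For instance, with SciPy this is one call over the full score lists (the numbers below are made up for illustration):

```python
from scipy.stats import spearmanr

# Made-up MOS values and one metric's predictions, aligned per video.
mos =    [3.1, 4.5, 2.2, 3.8, 4.9, 1.7]
metric = [0.45, 0.78, 0.33, 0.66, 0.91, 0.40]

# A single SRCC over the whole list -- no per-group splitting for MOS.
srcc, _ = spearmanr(mos, metric)
print(round(srcc, 3))  # -> 0.943
```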


## Correlation Calculation for BT and ELO

<br />

The following pipeline should be applied only when correlating metric scores with **BT and ELO** subjective scores.

There are 59 different original (pristine) videos, as well as several encoding presets, in the dataset. **Please note: the correlation coefficient (SRCC, KRCC, ...) must be calculated on each of these groups SEPARATELY.** Therefore, to get a single correlation for the whole dataset, use the Fisher Z-transform to average the group correlations, weighted proportionally to group size, as follows:

<br />

1) Iterate through the 59 original videos and, for each, calculate one correlation coefficient per unique preset of that video (e.g., for basketball-2021 with the two presets *fast* and *offline* you obtain 2 correlations)

<br />

2) Apply the inverse hyperbolic tangent (artanh) to each correlation coefficient
	* Replace a possible infinity (from a correlation of ±1) with artanh(0.99)
	
<br />

3) Apply a weighted arithmetic mean to the transformed values. For example, if $z_1 = \operatorname{artanh}(SROCC_1)$ is the transformed Spearman correlation for a group of $Size_1$ samples and $z_2 = \operatorname{artanh}(SROCC_2)$ for a group of $Size_2$ samples, then the mean is $\frac{z_1 \cdot Size_1 + z_2 \cdot Size_2}{Size_1 + Size_2}$.

<br />

4) Calculate the hyperbolic tangent (tanh) of the weighted mean
	* Take its absolute value and replace 0.99 with 1

<br />

5) The resulting value is the correlation between your method's scores and the subjective scores on our dataset.

<br />

The script for calculating metric correlations with the subjective scores (BT and ELO) is provided in the GitHub repo: https://github.com/msu-video-group/MSU_VQM_Compression_Benchmark
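The five steps above can be sketched in Python as follows (a minimal sketch, not the official script from the repo; the group data below is made up):

```python
import math
from scipy.stats import spearmanr

def fisher_weighted_srcc(groups):
    """Aggregate per-group SRCCs via the Fisher Z-transform (steps 1-5).

    `groups` is a list of (subjective_scores, metric_scores) pairs, one per
    (original video, encoding preset) combination.
    """
    zs, weights = [], []
    for subj, pred in groups:
        rho, _ = spearmanr(subj, pred)      # step 1: per-group SRCC
        if abs(rho) >= 1.0:                 # step 2: artanh would be infinite,
            rho = math.copysign(0.99, rho)  # so clamp to +/-0.99 first
        zs.append(math.atanh(rho))          # step 2: artanh
        weights.append(len(subj))
    # Step 3: weighted arithmetic mean, weights proportional to group size.
    z_mean = sum(z * w for z, w in zip(zs, weights)) / sum(weights)
    corr = abs(math.tanh(z_mean))           # step 4: tanh, absolute value
    return 1.0 if math.isclose(corr, 0.99) else corr  # step 4: snap 0.99 to 1

# Made-up groups: one with perfect agreement, one with partial agreement.
g1 = ([1, 2, 3, 4], [0.1, 0.2, 0.3, 0.4])
g2 = ([1, 2, 3, 4, 5], [0.2, 0.1, 0.3, 0.5, 0.4])
final = fisher_weighted_srcc([g1, g2])
print(round(final, 3))  # -> 0.945
```

The weighting ensures that a preset group with more distorted videos contributes proportionally more to the final correlation than a smaller group.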



---


## Encoding and Decoding


<br />

* To encode videos we used the following command:
```
	ffmpeg -f rawvideo -vcodec rawvideo -s {width}x{height} -r {FPS} -pix_fmt yuv420p -i {video name}.yuv -c:v libx265 -x265-params "lossless=1:qp=0" -vsync 0 {video name}.mp4
```

* To decode the video back to YUV you can use:
```
	ffmpeg -i {video name}.mp4 -pix_fmt yuv420p -vcodec rawvideo -f rawvideo {video name}.yuv
```
* To convert the encoded video to the set of PNG images you can use:
```
	ffmpeg -i {video name}.mp4 {frames dir}/frame_%05d.png
```
<br />

## Citation
Please cite the corresponding papers when using this dataset:
```bibtex
@inproceedings{gushchin2025leha,
  author    = {Gushchin, Aleksandr and Smirnov, Maksim and Vatolin, Dmitriy S. and Antsiferova, Anastasia},
  title     = {LEHA-CVQAD: Dataset To Enable Generalized Video Quality Assessment of Compression Artifacts},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  year      = {2025},
  pages     = {13405--13412},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  doi       = {10.1145/3746027.3758303},
  url       = {https://doi.org/10.1145/3746027.3758303}
}

@inproceedings{antsiferova2022video,
  author    = {Antsiferova, Anastasia and Lavrushkin, Sergey and Smirnov, Maksim and Gushchin, Aleksandr and Vatolin, Dmitriy and Kulikov, Dmitriy},
  title     = {Video compression dataset and benchmark of learning-based video-quality metrics},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {35},
  pages     = {13814--13825},
  year      = {2022},
  publisher = {Curran Associates, Inc.},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2022/file/59ac9f01ea2f701310f3d42037546e4a-Paper-Datasets_and_Benchmarks.pdf}
}
```


## Support and Maintenance

<br />

The CMC MSU Graphics and Media Lab hosts the dataset, and the team working with codecs and video quality assessment methods maintains it. The authors of this paper also support the video quality metrics benchmark. If you have any questions regarding the usage of LEHA-CVQAD, please feel free to contact us at vqa@videoprocessing.ai