
MSU Compression Dataset Description


We developed the LEHA-CVQAD dataset to evaluate full-reference and no-reference video quality metrics. Here we share the open part of the whole compression-artifacts dataset (1,962 out of 6,240 videos); the hidden part is available only to benchmark-support personnel for testing metric performance. Most videos are FullHD resolution, YUV420, and 10-15 seconds in duration. Fps values are 24, 25, 30, 39, 50, and 60.

Subjective quality scores are also provided in a CSV file. The higher the score, the better the quality. To learn more about the subjective quality evaluation procedure of our benchmark, you can visit the FAQ section at Subjectify.us.

A more detailed description of the dataset and benchmark methodology can be found in the paper TODO.

A leaderboard of more than 100 metrics on the LEHA-CVQAD dataset is available at the MSU Video Quality Metrics Benchmark page.


Dataset Folder Structure


  • Subjective_scores_and_videos_info.csv contains subjective scores (MOS, Bradley-Terry, ELO) for each compressed video. Besides its subjective quality, each distorted video has the following characteristics:
    • name of the original (pristine) video
    • codec used for encoding
    • codec standard (avc, hevc, vvc, av1, ...)
    • target bitrate or crf
    • bitrate range (high, mid, low)
    • original video resolution
    • original video fps

  • Metrics_scores.csv contains the values of 100+ VQA metrics on our dataset and can be used to calculate the correlations of VQA metrics with the subjective scores

  • Compressed_and_GT_videos contains 59 folders, each of which includes one reference video (GT), required for testing full-reference metrics, and multiple distorted (compressed) videos grouped by encoding preset:

    • Each distorted video has the following pattern: {video name}/{encoding preset}/{codec name}_{crf or bitrate}.mp4

    • Each reference video has the following pattern: {video name}/GT.mp4
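The layout above can be traversed programmatically. Here is a minimal Python sketch that pairs each reference video with its distorted versions; the root folder name follows the description above, but the helper function and its name are our own illustration, not part of the dataset tooling:

```python
from pathlib import Path

def list_pairs(root: Path):
    """Yield (reference, distorted) path pairs following the dataset layout:
    {video name}/GT.mp4 and {video name}/{encoding preset}/{codec name}_{crf or bitrate}.mp4
    """
    for video_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        reference = video_dir / "GT.mp4"
        for distorted in sorted(video_dir.glob("*/*.mp4")):
            yield reference, distorted

if __name__ == "__main__":
    root = Path("Compressed_and_GT_videos")
    if root.exists():
        for ref, dist in list_pairs(root):
            print(ref, "->", dist)
```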


Correlation Calculation for MOS

The following pipeline should be applied only when calculating the correlation between metric scores and the MOS subjective scores.

Simply compute a single correlation coefficient over the whole list of MOS subjective scores and metric scores.

Correlation Calculation for BT and ELO


The following pipeline should be applied only when calculating the correlation between metric scores and the BT and ELO subjective scores.

There are 59 different original (pristine) videos in the dataset, as well as several encoding presets. Please pay attention: the correlation coefficient (SRCC, KRCC, ...) must be calculated on each of these groups SEPARATELY. To obtain a single correlation for the whole dataset, use the Fisher Z-transform to average the group correlations, weighted proportionally to group size, as follows:


  1. Iterate through the 59 original videos and, for each, calculate a correlation coefficient once per unique encoding preset for that video (e.g., for basketball-2021 with two presets, fast and offline, you obtain 2 correlations)

  2. Apply the inverse hyperbolic tangent (artanh) to each correlation value
    • Replace possible infinities with artanh(0.99)

  3. Apply a weighted arithmetic mean to the transformed values. For example, if $z_1 = \operatorname{artanh}(SROCC_1)$ is the transformed Spearman correlation for a group of $Size_1$ samples and $z_2 = \operatorname{artanh}(SROCC_2)$ for a group of $Size_2$ samples, the weighted mean is $\frac{z_1 \cdot Size_1 + z_2 \cdot Size_2}{Size_1 + Size_2}$

  4. Calculate the hyperbolic tangent (tanh) of the weighted mean
    • Take its absolute value and replace 0.99 with 1

  5. The obtained value represents the correlation between your method's scores and the subjective scores on our dataset
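The averaging steps above (artanh, size-weighted mean, tanh) can be sketched in Python. This is our own minimal illustration of the pipeline, not the official benchmark script; the function name and the example correlations are placeholders:

```python
import numpy as np

def fisher_average(correlations, sizes, clip=0.99):
    """Size-weighted Fisher z-average of per-group correlations."""
    # Clip to ±0.99 so artanh(±1) never produces infinity.
    r = np.clip(np.asarray(correlations, dtype=float), -clip, clip)
    z = np.arctanh(r)                      # artanh of each correlation
    w = np.asarray(sizes, dtype=float)
    z_mean = np.sum(z * w) / np.sum(w)     # weighted arithmetic mean
    result = abs(np.tanh(z_mean))          # tanh, then absolute value
    if result > clip - 1e-9:               # replace 0.99 with 1
        result = 1.0
    return result

# Example: two hypothetical groups (one per preset of a source video),
# with per-group SROCC values and sample counts.
print(fisher_average([0.9, 0.8], [10, 14]))
```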

A script to calculate metric correlations with the subjective scores (BT and ELO) is provided in the GitHub repo: https://github.com/msu-video-group/MSU_VQM_Compression_Benchmark



Encoding and Decoding


  • To encode videos we used the following command:
    ffmpeg -f rawvideo -vcodec rawvideo -s {width}x{height} -r {FPS} -pix_fmt yuv420p -i {video name}.yuv -c:v libx265 -x265-params "lossless=1:qp=0" -vsync 0 {video name}.mp4
  • To decode the video back to YUV you can use:
    ffmpeg -i {video name}.mp4 -pix_fmt yuv420p -vcodec rawvideo -f rawvideo {video name}.yuv
  • To convert the encoded video to the set of PNG images you can use:
    ffmpeg -i {video name}.mp4 {frames dir}/frame_%05d.png

Citation

Please cite the corresponding papers when using this dataset:

@inproceedings{gushchin2025leha,
  author    = {Gushchin, Aleksandr and Smirnov, Maksim and Vatolin, Dmitriy S. and Antsiferova, Anastasia},
  title     = {LEHA-CVQAD: Dataset To Enable Generalized Video Quality Assessment of Compression Artifacts},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  year      = {2025},
  pages     = {13405--13412},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  doi       = {10.1145/3746027.3758303},
  url       = {https://doi.org/10.1145/3746027.3758303}
}

@inproceedings{antsiferova2022video,
  author    = {Antsiferova, Anastasia and Lavrushkin, Sergey and Smirnov, Maksim and Gushchin, Aleksandr and Vatolin, Dmitriy and Kulikov, Dmitriy},
  title     = {Video compression dataset and benchmark of learning-based video-quality metrics},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {35},
  pages     = {13814--13825},
  year      = {2022},
  publisher = {Curran Associates, Inc.},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2022/file/59ac9f01ea2f701310f3d42037546e4a-Paper-Datasets_and_Benchmarks.pdf}
}

Support and Maintenance


The CMC MSU Graphics and Media Lab hosts the dataset, and it is maintained by the team working on codecs and video quality assessment methods. The authors also support the video quality metrics benchmark. If you have any questions regarding the usage of LEHA-CVQAD, please feel free to contact us at vqa@videoprocessing.ai