---
configs:
- config_name: default
  data_files:
  - split: train
    path: Train/**
  - split: test
    path: Test/**
license: cc-by-4.0
tags:
- video
- saliency
- human
- crowdsourcing
pretty_name: CrowdSAL
size_categories:
- 1K<n<10K
---


# CrowdSAL: Video Saliency Dataset and Benchmark

![image](https://cdn-uploads.huggingface.co/production/uploads/623a3b20fa4890c51b04cba7/KX-knFqpLI4ngRUozQnIx.png)

## Dataset
[![Dataset Page](https://img.shields.io/badge/Dataset-Page-brightgreen?style=for-the-badge)](https://videoprocessing.ai/datasets/crowdsal.html)
[![Google Drive](https://img.shields.io/badge/Google%20Drive%20Copy-4285F4?style=for-the-badge&logo=googledrive&logoColor=white)](https://drive.google.com/drive/folders/1daH-14w_vHLc9OuGQ_RU0HgUv_Wc3G0o?usp=sharing)

**CrowdSAL** is the largest video saliency dataset, with the following key features:
* Large scale: **5000** videos (mean duration **18.4 s**, **2.7M+** frames);
* Mouse-tracked fixations from **>19000** observers (**>75** per video);
* **Audio** tracks preserved and played back to observers;
* High resolution: all streams are **FullHD**;
* Diverse content from **YouTube, Shorts, and Vimeo**;
* License: **CC-BY**.
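
Given the `configs` block in the YAML header above, the Train/Test splits should be loadable with the 🤗 `datasets` library. A minimal sketch, assuming a hypothetical hub ID of `msu-video-group/CrowdSAL` (substitute the dataset's actual repository ID):

```python
# Minimal sketch: load the splits declared in the YAML header.
# NOTE: the repository ID below is an assumption; use the actual hub ID.
from datasets import load_dataset

train = load_dataset("msu-video-group/CrowdSAL", split="train")  # Train/** files
test = load_dataset("msu-video-group/CrowdSAL", split="test")    # Test/** files
print(len(train), len(test))
```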

### File Structure

1) `Train/Test` folders — dataset splits; video IDs 0001-3000 belong to the Train subset, 3001-5000 to the Test subset;
  
2) `Videos` — 5000 FullHD, 30 FPS mp4 videos with audio streams;

3) `Saliency` — 5000 mp4 videos of continuous saliency maps, compressed almost losslessly (CRF 0, 10-bit, min-max normalized);

4) `Fixations` — 5000 json files with the per-frame fixation coordinates from which the saliency maps were obtained (see the loading sketch after this list);

5) `metadata.jsonl` — metadata for each video (license, source URL, etc.).
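
The card does not spell out the fixation JSON schema, so here is a minimal sketch of how one might rasterize a frame's fixations into a continuous saliency map, assuming (hypothetically) that each file maps frame indices to lists of `[x, y]` pixel coordinates:

```python
# Minimal sketch: turn per-frame fixations into a blurred saliency map.
# NOTE: the JSON layout assumed below ({"0": [[x, y], ...], ...}) is
# hypothetical; check the actual files in `Fixations` for the real schema.
import json

import cv2
import numpy as np

W, H = 1920, 1080  # FullHD, per the dataset description

with open("Fixations/0001.json") as f:  # hypothetical file name
    frames = json.load(f)

salmap = np.zeros((H, W), dtype=np.float32)
for x, y in frames["0"]:  # fixations of the first frame (assumed key)
    salmap[int(y), int(x)] += 1.0

# Smooth the fixation histogram with a Gaussian and min-max normalize,
# mirroring the "min-max normalized" continuous maps described above.
salmap = cv2.GaussianBlur(salmap, (0, 0), sigmaX=35)
salmap = (salmap - salmap.min()) / (salmap.max() - salmap.min() + 1e-8)
```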


## Benchmark Evaluation

[![GitHub Code](https://img.shields.io/badge/Github%20Code-blue?style=for-the-badge&logo=github)](https://github.com/msu-video-group/CrowdSAL)

### Environment Setup

```
conda create -n saliency python=3.10.19
conda activate saliency
pip install numpy==2.2.6 opencv-python-headless==4.12.0.88 tqdm==4.67.1
conda install ffmpeg=4.4.2 -c conda-forge
```

### Run Evaluation

Usage example:

1) Check that your predictions match the structure and file names of the Test subset;
2) Install all dependencies from Environment Setup;
3) Download and extract all CrowdSAL files from the dataset page;
4) Run `python bench.py` with the following flags:
* `--model_video_predictions` — folder with your predicted saliency videos
* `--model_extracted_frames` — folder to store prediction frames (must not exist at launch time)
* `--gt_video_predictions` — folder with ground-truth saliency videos from the dataset page
* `--gt_extracted_frames` — folder to store ground-truth frames (must not exist at launch time)
* `--gt_fixations_path` — folder with ground-truth fixations from the dataset page
* `--mode` — which subset to evaluate (`Train` or `Test`)
* `--results_json` — path to the output results JSON
5) The results will be written to the path given by `--results_json`; an example invocation is shown below.
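
Putting the flags together, an invocation could look like the following sketch (all paths are placeholders; the two `*_extracted_frames` folders must not exist when the script is launched):

```
# NOTE: all folder paths are placeholders; adjust them to your setup.
python bench.py \
  --model_video_predictions ./my_predictions \
  --model_extracted_frames ./my_predicted_frames \
  --gt_video_predictions ./Saliency \
  --gt_extracted_frames ./gt_frames \
  --gt_fixations_path ./Fixations \
  --mode Test \
  --results_json ./results.json
```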