- split: test
  path: "Test/**"
license: cc-by-4.0
---

# CrowdSAL: Video Saliency Dataset and Benchmark

## Dataset

[Hugging Face](https://huggingface.co/datasets/ANDRYHA/CrowdSAL)
[Google Drive](https://drive.google.com/drive/folders/1daH-14w_vHLc9OuGQ_RU0HgUv_Wc3G0o?usp=sharing)

**CrowdSAL** is the largest video saliency dataset with the following key features:
* Large scale: **5000** videos with mean **18.4s** duration, **2.7M+** frames;
* Mouse fixations from **>19000** observers (**>75** per video);
* **Audio** track saved and played to observers;
* High resolution: all streams are **FullHD**;
* Diverse content from **YouTube, Shorts, Vimeo**;
* License: **CC-BY**.

### File Structure

1) `Train/Test` folders — dataset splits: IDs 0001-3000 belong to the Train subset, 3001-5000 to the Test subset;
2) `Videos` — 5000 FullHD, 30 FPS mp4 videos with audio streams;
3) `Saliency` — 5000 near-losslessly compressed (crf 0, 10-bit, min-max normalized) mp4 videos of continuous saliency maps;
4) `Fixations` — 5000 json files with the per-frame fixation coordinates from which the saliency maps were obtained;
5) `metadata.jsonl` — metadata about each video (e.g. license, source URL).

## Benchmark Evaluation

### Environment Setup

```
conda create -n saliency python=3.10.19
conda activate saliency
pip install numpy==2.2.6 opencv-python-headless==4.12.0.88 tqdm==4.67.1
conda install ffmpeg=4.4.2 -c conda-forge
```

### Run Evaluation

Usage example:

1) Check that your predictions match the structure and file names of the Test subset;
2) Install all dependencies from Environment Setup;
3) Download and extract all CrowdSAL files from the dataset page;
4) Run `python bench.py` with the following flags:
   * `--model_video_predictions` — folder with predicted saliency videos
   * `--model_extracted_frames` — folder to store prediction frames (should not exist at launch time)
   * `--gt_video_predictions` — folder from the dataset page with ground-truth saliency videos
   * `--gt_extracted_frames` — folder to store ground-truth frames (should not exist at launch time)
   * `--gt_fixations_path` — folder from the dataset page with ground-truth fixations
   * `--mode` — which subset to evaluate (Train or Test)
   * `--results_json` — path to the output results json
5) The computed metrics will be written to the `results_json` path.
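Putting the flags together, an invocation might look like the following; all paths are placeholders for your local layout, and the `Test` value for `--mode` is assumed from the Train/Test split description:

```shell
python bench.py \
  --model_video_predictions ./my_model/saliency_videos \
  --model_extracted_frames ./tmp/model_frames \
  --gt_video_predictions ./CrowdSAL/Saliency \
  --gt_extracted_frames ./tmp/gt_frames \
  --gt_fixations_path ./CrowdSAL/Fixations \
  --mode Test \
  --results_json ./results.json
```

Note that `./tmp/model_frames` and `./tmp/gt_frames` must not exist when the script is launched.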