5) `VideoInfo.json` — meta information about each video (e.g. license)

6) `SampleSubmission.zip` — an example Center Prior submission for the challenge, obtained by fitting a mean Gaussian to the training saliency maps.
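The Center Prior baseline above can be reproduced in a few lines. The sketch below is an illustration, not the organizers' exact script: it moment-matches an axis-aligned 2-D Gaussian to the average of the training saliency maps (the `center_prior` helper name and the `(N, H, W)` input layout are assumptions).

```python
import numpy as np

def center_prior(train_maps):
    """Fit a mean Gaussian to training saliency maps (illustrative sketch).

    train_maps: array of shape (N, H, W) with non-negative saliency values.
    Returns an (H, W) center-prior map with peak value 1.0.
    """
    mean_map = train_maps.mean(axis=0)
    h, w = mean_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = mean_map.sum()
    # Moment-match: mean and variance of the saliency mass along each axis.
    mu_y = (ys * mean_map).sum() / total
    mu_x = (xs * mean_map).sum() / total
    var_y = ((ys - mu_y) ** 2 * mean_map).sum() / total
    var_x = ((xs - mu_x) ** 2 * mean_map).sum() / total
    g = np.exp(-((ys - mu_y) ** 2 / (2 * var_y)
                 + (xs - mu_x) ** 2 / (2 * var_x)))
    return g / g.max()
```

Rendering this map at the video resolution for every frame yields a valid (if weak) submission with the same file structure as `SampleSubmission.zip`.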

## Evaluation

### Environment Setup

Usage example:

1) Check that your predictions match the structure and file names of the baseline `SampleSubmission.zip` submission.
2) Install the dependencies: `pip install -r requirments.txt` and `conda install ffmpeg`
3) Download and extract the `SaliencyTest.zip`, `FixationsTest.zip`, and `TrainTestSplit.json` files from the dataset page
4) Run `python bench.py` with the following flags:
* `--model_video_predictions ./SampleSubmission` — folder with the predicted saliency videos
* `--model_extracted_frames ./SampleSubmission-Frames` — folder to store prediction frames (should not exist at launch time), requires ~170 GB of free space
* `--gt_video_predictions ./SaliencyTest/Test` — folder from the dataset page with the ground-truth saliency videos
* `--gt_extracted_frames ./SaliencyTest-Frames` — folder to store ground-truth frames (should not exist at launch time), requires ~170 GB of free space
* `--gt_fixations_path ./FixationsTest/Test` — folder from the dataset page with the ground-truth saliency fixations
* `--split_json ./TrainTestSplit.json` — JSON from the dataset page with the train/test name split
* `--results_json ./results.json` — path to the output results JSON
* `--mode public_test` — selects the `public_test` or `private_test` subset
5) The results will be written to the `results.json` path

[NTIRE 2026](https://www.cvlai.net/ntire/2026/)
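Putting the steps above together, a complete run with the example paths from the flag list might look like this (the paths are illustrative — adjust them to wherever you extracted the data):

```shell
python bench.py \
    --model_video_predictions ./SampleSubmission \
    --model_extracted_frames ./SampleSubmission-Frames \
    --gt_video_predictions ./SaliencyTest/Test \
    --gt_extracted_frames ./SaliencyTest-Frames \
    --gt_fixations_path ./FixationsTest/Test \
    --split_json ./TrainTestSplit.json \
    --results_json ./results.json \
    --mode public_test
```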