Add task category
#2, opened by nielsr (HF Staff)

README.md CHANGED
```diff
@@ -1,13 +1,20 @@
 ---
+language:
+- en
 license: cc-by-nc-sa-4.0
-
-
-
-
-
-
-
-
+task_categories:
+- video-text-to-text
+extra_gated_prompt: "You acknowledge and understand that: This dataset is provided\
+  \ solely for academic research purposes. It is not intended for commercial use or\
+  \ any other non-research activities. All copyrights, trademarks, and other intellectual\
+  \ property rights related to the videos in the dataset remain the exclusive property\
+  \ of their respective owners. \n You assume full responsibility for any additional\
+  \ use or dissemination of this dataset and for any consequences that may arise from\
+  \ such actions. You are also aware that the copyright holders of the original videos\
+  \ retain the right to request the removal of their videos from the dataset. \n Furthermore,\
+  \ it is your responsibility to respect these conditions and to use the dataset ethically\
+  \ and in compliance with all applicable laws and regulations. Any violation of these\
+  \ terms may result in the immediate termination of your access to the dataset."
 extra_gated_fields:
   Institution: text
   Name: text
@@ -17,12 +24,12 @@ extra_gated_fields:
     options:
     - Research
   I agree to use this dataset solely for research purposes: checkbox
-I will not use this dataset in any way that infringes upon the rights of the copyright
-
-
-
-- en
+? I will not use this dataset in any way that infringes upon the rights of the copyright
+  holders of the original videos, and strictly prohibit its use for any commercial
+  purposes
+: checkbox
 ---
+
 <h1 align="center">MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos</h1>
 <p align="center">
   <a href="https://yhy-2000.github.io/MomentSeeker/">
```
```diff
@@ -95,6 +102,8 @@ If the original authors of the related works still believe that the videos shoul
 The JSON file provides candidate videos for each question. The candidates can be ranked, and metrics such as Recall@1 and MAP@5 can be computed accordingly.
 
 
+We evaluate the video models (LanguageBind and InternVideo2) using an input of uniformly sampled 8 frames, while COVR follows its default setting of 15 frames. For image models, we use the temporally middle frame as the video input. Additionally, we provide the evaluation code as a reference. To reproduce our results, users should follow the respective original repositories to set up a conda environment, download the model weights, and then run our code.
+
 
 ## Hosting and Maintenance
 The annotation files will be permanently retained.
```
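The ranking metrics named above (Recall@1, MAP@5) can be sketched as follows. This is an illustrative implementation, not the benchmark's official evaluation code; the function names and the toy candidate IDs are invented for the example.

```python
def recall_at_k(ranked, relevant, k=1):
    """1.0 if any of the top-k ranked candidates is a relevant video, else 0.0."""
    return float(any(vid in relevant for vid in ranked[:k]))

def average_precision_at_k(ranked, relevant, k=5):
    """Average precision over the top-k ranked candidates for one query.

    MAP@5 is this value averaged over all queries in the benchmark.
    """
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for i, vid in enumerate(ranked[:k]):
        if vid in relevant:
            hits += 1
            score += hits / (i + 1)  # precision at the rank of this hit
    return score / min(len(relevant), k)

# Toy example: the ground-truth video is "v2", but the model ranks "v3" first.
ranked = ["v3", "v2", "v1", "v4", "v5"]
relevant = {"v2"}
r1 = recall_at_k(ranked, relevant, k=1)              # 0.0: "v2" is not ranked first
ap5 = average_precision_at_k(ranked, relevant, k=5)  # 0.5: one hit at rank 2
```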
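The evaluation note above mentions feeding video models 8 uniformly sampled frames and giving image models the temporally middle frame. A minimal sketch of that frame selection, assuming frames are addressed by index (the helper names are ours, not from the released code):

```python
def uniform_frame_indices(num_frames: int, num_samples: int = 8) -> list[int]:
    """Pick num_samples frame indices spread evenly across the video.

    Each index is the centre of one of num_samples equal temporal segments,
    a common preprocessing choice for video retrieval models.
    """
    seg = num_frames / num_samples
    return [min(num_frames - 1, int(seg * (i + 0.5))) for i in range(num_samples)]

def middle_frame_index(num_frames: int) -> int:
    """Temporally middle frame, used as the video input for image models."""
    return num_frames // 2

# An 80-frame clip sampled at 8 frames: one frame from the middle of each 10-frame segment.
print(uniform_frame_indices(80, 8))  # [5, 15, 25, 35, 45, 55, 65, 75]
print(middle_frame_index(80))        # 40
```

Note that for clips shorter than `num_samples` frames, segment centres repeat indices, so some frames are sampled more than once rather than the call failing.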