Add video-classification task category, link to paper and code

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +14 -10
README.md CHANGED
````diff
@@ -1,13 +1,17 @@
 ---
+language:
+- en
 license: mit
-extra_gated_prompt: >-
-  You agree to not use the dataset to conduct experiments that cause harm to
-  human subjects. Please note that the data in this dataset may be subject to
-  other agreements. Before using the data, be sure to read the relevant
-  agreements carefully to ensure compliant use. Video copyrights belong to the
-  original video creators or platforms and are for academic research use only.
+size_categories:
+- 1K<n<10K
 task_categories:
 - visual-question-answering
+- video-classification
+extra_gated_prompt: You agree to not use the dataset to conduct experiments that cause
+  harm to human subjects. Please note that the data in this dataset may be subject
+  to other agreements. Before using the data, be sure to read the relevant agreements
+  carefully to ensure compliant use. Video copyrights belong to the original video
+  creators or platforms and are for academic research use only.
 extra_gated_fields:
   Name: text
   Company/Organization: text
@@ -57,11 +61,8 @@ configs:
   data_files: json/moving_attribute.json
 - config_name: egocentric_navigation
   data_files: json/egocentric_navigation.json
-language:
-- en
-size_categories:
-- 1K<n<10K
 ---
+
 # MVTamperBench Dataset
 
 ## Overview
@@ -164,6 +165,9 @@ You can access the MVTamperBench dataset directly from the Hugging Face repository
 
 If you use MVTamperBench in your research, please cite:
 
+[Paper](https://arxiv.org/abs/2412.19794)
+[Code](https://github.com/OpenGVLab/MVTamperBench)
+
 ```bibtex
 @misc{agarwal2025mvtamperbenchevaluatingrobustnessvisionlanguage,
   title={MVTamperBench: Evaluating Robustness of Vision-Language Models},
````
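
One point worth noting for reviewers: the `extra_gated_prompt` change is purely a re-flow. YAML folds the original `>-` block scalar and the new plain multi-line scalar the same way, joining wrapped continuation lines with single spaces, so the parsed prompt string should be identical. A minimal sketch of that folding rule, using the lines from the hunk above (an approximation of YAML folding, not a full parser):

```python
# Continuation lines of the removed ">-" folded block scalar.
old_lines = [
    "You agree to not use the dataset to conduct experiments that cause harm to",
    "human subjects. Please note that the data in this dataset may be subject to",
    "other agreements. Before using the data, be sure to read the relevant",
    "agreements carefully to ensure compliant use. Video copyrights belong to the",
    "original video creators or platforms and are for academic research use only.",
]

# Lines of the added plain scalar (first line follows "extra_gated_prompt:").
new_lines = [
    "You agree to not use the dataset to conduct experiments that cause",
    "harm to human subjects. Please note that the data in this dataset may be subject",
    "to other agreements. Before using the data, be sure to read the relevant agreements",
    "carefully to ensure compliant use. Video copyrights belong to the original video",
    "creators or platforms and are for academic research use only.",
]

def fold(lines):
    """Approximate YAML line folding: join wrapped lines with single spaces."""
    return " ".join(lines)

# Both spellings parse to the same prompt string.
print(fold(old_lines) == fold(new_lines))  # True
```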