Improve dataset card: Add links and correct task category (#2)
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,13 +1,16 @@
 ---
+language:
+- en
 license: mit
-extra_gated_prompt: You agree to not use the dataset to conduct experiments
-  that cause harm to
-  human subjects. Please note that the data in this dataset may be subject to
-  other agreements. Before using the data, be sure to read the relevant
-  agreements carefully to ensure compliant use. Video copyrights belong to the
-  original video creators or platforms and are for academic research use only.
+size_categories:
+- 1K<n<10K
 task_categories:
--
+- video-classification
+extra_gated_prompt: You agree to not use the dataset to conduct experiments that cause
+  harm to human subjects. Please note that the data in this dataset may be subject
+  to other agreements. Before using the data, be sure to read the relevant agreements
+  carefully to ensure compliant use. Video copyrights belong to the original video
+  creators or platforms and are for academic research use only.
 extra_gated_fields:
   Name: text
   Company/Organization: text
@@ -55,13 +58,12 @@ configs:
   data_files: json/moving_attribute.json
 - config_name: egocentric_navigation
   data_files: json/egocentric_navigation.json
-language:
-- en
-size_categories:
-- 1K<n<10K
 ---
+
 # MVTamperBench Dataset
 
+[Paper](https://arxiv.org/abs/2412.19794) | [Code](https://github.com/Srikant86/MVTamperBench)
+
 ## Overview
 
 **MVTamperBenchEnd** is a robust benchmark designed to evaluate Vision-Language Models (VLMs) against adversarial video tampering effects. It leverages the diverse and well-structured MVBench dataset, systematically augmented with four distinct tampering techniques:
@@ -177,4 +179,4 @@ If you use MVTamperBench in your research, please cite:
 
 ## License
 
-MVTamperBench is released under the MIT License. See `LICENSE` for details.
+MVTamperBench is released under the MIT License. See `LICENSE` for details.
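In the card's `configs` block, each `config_name` (e.g. `egocentric_navigation`) maps to a single JSON annotation file under `data_files`. A minimal sketch of how one such file could be consumed is below; note the record fields (`video`, `question`, `candidates`, `answer`) are illustrative guesses modeled on MVBench-style annotations, not a schema confirmed by this card:

```python
import json
import os
import tempfile

# Hypothetical miniature of one task file such as json/egocentric_navigation.json.
# Field names are assumptions; the real MVTamperBench schema may differ.
records = [
    {
        "video": "clip_0001.mp4",
        "question": "Which direction does the camera wearer turn?",
        "candidates": ["left", "right", "forward"],
        "answer": "left",
    }
]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "egocentric_navigation.json")
    with open(path, "w") as f:
        json.dump(records, f)

    # Each config_name in the card points at one such data_files JSON;
    # loading it yields a list of question/answer records for that task.
    with open(path) as f:
        loaded = json.load(f)

print(len(loaded), loaded[0]["answer"])
```

On the Hub, the same records would normally be reached through `datasets.load_dataset` with the config name (e.g. `load_dataset(repo_id, "egocentric_navigation")`, where `repo_id` is whatever repository hosts this card).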