---
license: mit
extra_gated_prompt: >-
  You agree to not use the dataset to conduct experiments that cause harm to
  human subjects. Please note that the data in this dataset may be subject to
  other agreements. Before using the data, be sure to read the relevant
  agreements carefully to ensure compliant use. Video copyrights belong to the
  original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
modalities:
- Video
- Text
configs:
- config_name: action_sequence
  data_files: json/action_sequence.json
- config_name: moving_count
  data_files: json/moving_count.json
- config_name: action_prediction
  data_files: json/action_prediction.json
- config_name: episodic_reasoning
  data_files: json/episodic_reasoning.json
- config_name: action_antonym
  data_files: json/action_antonym.json
- config_name: action_count
  data_files: json/action_count.json
- config_name: scene_transition
  data_files: json/scene_transition.json
- config_name: object_shuffle
  data_files: json/object_shuffle.json
- config_name: object_existence
  data_files: json/object_existence.json
- config_name: fine_grained_pose
  data_files: json/fine_grained_pose.json
- config_name: unexpected_action
  data_files: json/unexpected_action.json
- config_name: moving_direction
  data_files: json/moving_direction.json
- config_name: state_change
  data_files: json/state_change.json
- config_name: object_interaction
  data_files: json/object_interaction.json
- config_name: character_order
  data_files: json/character_order.json
- config_name: action_localization
  data_files: json/action_localization.json
- config_name: counterfactual_inference
  data_files: json/counterfactual_inference.json
- config_name: fine_grained_action
  data_files: json/fine_grained_action.json
- config_name: moving_attribute
  data_files: json/moving_attribute.json
- config_name: egocentric_navigation
  data_files: json/egocentric_navigation.json
language:
- en
size_categories:
- 1K<n<10K
---
# MVTamperBench Dataset

## Overview

**MVTamperBench** is a robust benchmark designed to evaluate Vision-Language Models (VLMs) against adversarial video tampering effects. It leverages the diverse and well-structured MVBench dataset, systematically augmented with four distinct tampering techniques:

1. **Masking**: Overlays a black rectangle on a 1-second segment, simulating visual data loss.
2. **Repetition**: Repeats a 1-second segment, introducing temporal redundancy.
3. **Rotation**: Rotates a 1-second segment by 180 degrees, introducing spatial distortion.
4. **Substitution**: Replaces a 1-second segment with a random clip from another video, disrupting the temporal and contextual flow.

The tampering effects are applied to the middle of each video to ensure consistent evaluation across models.
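
As a concrete illustration, the four effects can be sketched on a video held as a NumPy array of frames. This is a simplified sketch, not the benchmark's actual implementation; the frame rate, mask region, and function names are assumptions:

```python
import numpy as np

FPS = 30  # assumed frame rate; the benchmark tampers a 1-second segment


def middle_segment(num_frames: int, fps: int = FPS) -> slice:
    """Return the slice covering the 1-second segment at the video's center."""
    start = max(0, num_frames // 2 - fps // 2)
    return slice(start, start + fps)


def mask(video: np.ndarray) -> np.ndarray:
    """Overlay a black rectangle on the middle 1-second segment."""
    out = video.copy()
    seg = middle_segment(len(out))
    h, w = out.shape[1:3]
    out[seg, h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 0  # assumed mask region
    return out


def repeat(video: np.ndarray) -> np.ndarray:
    """Repeat the middle 1-second segment once (temporal redundancy)."""
    seg = middle_segment(len(video))
    return np.concatenate([video[: seg.stop], video[seg], video[seg.stop :]])


def rotate(video: np.ndarray) -> np.ndarray:
    """Rotate the middle 1-second segment by 180 degrees."""
    out = video.copy()
    seg = middle_segment(len(out))
    out[seg] = np.rot90(out[seg], k=2, axes=(1, 2))  # rotate each frame in place
    return out


def substitute(video: np.ndarray, donor: np.ndarray) -> np.ndarray:
    """Replace the middle 1-second segment with a clip from another video."""
    out = video.copy()
    seg = middle_segment(len(out))
    out[seg] = donor[: seg.stop - seg.start]
    return out
```

Under these assumptions, masking, rotation, and substitution preserve the frame count, while repetition lengthens the video by one second.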

---

## Dataset Details

The MVTamperBench dataset is built upon the **MVBench dataset**, a widely recognized collection used in video-language evaluation. It features a broad spectrum of content to ensure robust model evaluation, including:

- **Content Diversity**: Spanning a variety of objects, activities, and settings.
- **Temporal Dynamics**: Videos with temporal dependencies for coherence testing.
- **Benchmark Utility**: Recognized datasets enabling comparisons with prior work.

### Incorporated Datasets

The MVTamperBench dataset integrates videos from several sources, each contributing unique characteristics:

| Dataset Name | Primary Scene Type and Unique Characteristics |
|--------------------------|--------------------------------------------------------|
| STAR | Indoor actions and object interactions |
| PAXION | Real-world scenes with nuanced actions |
| Moments in Time (MiT) V1 | Indoor/outdoor scenes across varied contexts |
| FunQA | Humor-focused, creative, real-world events |
| CLEVRER | Simulated scenes for object movement and reasoning |
| Perception Test | First/third-person views for object tracking |
| Charades-STA | Indoor human actions and interactions |
| MoVQA | Diverse scenes for scene transition comprehension |
| VLN-CE | Indoor navigation from an agent's perspective |
| TVQA | TV show scenes for episodic reasoning |

### Dataset Expansion

The original MVBench dataset contains 3,699 videos. Applying the four tampering effects to each video expands the collection to a total of **18,495 videos** (each original plus its four tampered variants). This ensures:

- **Diversity**: Varied adversarial challenges for robust evaluation.
- **Volume**: Sufficient data for training and testing.
112
+ Below is a visual representation of the tampered video length distribution:
113
+
114
+ ![Tampered Video Length Distribution](./assert/tampered_video_length_distribution.png "Distribution of tampered video lengths")
115
+
116
+ ---
117
+
118
+ ## Benchmark Construction
119
+
120
+ MVTamperBench is built with modularity, scalability, and reproducibility at its core:
121
+
122
+ - **Modularity**: Each tampering effect is implemented as a reusable class, allowing for easy adaptation.
123
+ - **Scalability**: Supports customizable tampering parameters, such as location and duration.
124
+ - **Integration**: Fully compatible with VLMEvalKit, enabling seamless evaluations of tampering robustness alongside general VLM capabilities.
125
+
126
+ By maintaining consistent tampering duration (1 second) and location (center of the video), MVTamperBench ensures fair and comparable evaluations across models.
127
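
The modular design described above can be sketched as a small class hierarchy, with location and duration as constructor parameters. The class and parameter names here are illustrative assumptions, not the benchmark's actual API:

```python
from abc import ABC, abstractmethod

import numpy as np


class TamperingEffect(ABC):
    """Reusable tampering effect with configurable location and duration."""

    def __init__(self, duration_s: float = 1.0, location: float = 0.5, fps: int = 30):
        self.duration_s = duration_s
        self.location = location  # fraction of the video where the segment is centered
        self.fps = fps

    def segment(self, num_frames: int) -> slice:
        """Frame range to tamper, clamped to the video bounds."""
        length = int(self.duration_s * self.fps)
        center = int(self.location * num_frames)
        start = max(0, min(center - length // 2, num_frames - length))
        return slice(start, start + length)

    @abstractmethod
    def apply(self, video: np.ndarray) -> np.ndarray:
        """Return a tampered copy of the video (frames, height, width, channels)."""


class Rotation(TamperingEffect):
    """Rotate the selected segment by 180 degrees."""

    def apply(self, video: np.ndarray) -> np.ndarray:
        out = video.copy()
        seg = self.segment(len(out))
        out[seg] = np.rot90(out[seg], k=2, axes=(1, 2))
        return out
```

New effects plug in by subclassing `TamperingEffect`, and sweeping `location` and `duration_s` gives the customizability described above while the defaults reproduce the fixed 1-second, center-of-video setting.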

---

## Download Dataset

You can access the MVTamperBench dataset directly from the Hugging Face repository:

[Download MVTamperBench Dataset](https://huggingface.co/datasets/Srikant86/MVTamperBench)

---

## How to Use

1. Clone the Hugging Face repository:
   ```bash
   git clone https://huggingface.co/datasets/Srikant86/MVTamperBench
   cd MVTamperBench
   ```

2. Load the dataset using the Hugging Face `datasets` library. Each task is exposed as a separate config (e.g. `action_sequence`):
   ```python
   from datasets import load_dataset

   dataset = load_dataset("Srikant86/MVTamperBench", "action_sequence")
   ```

3. Explore the dataset structure and metadata:
   ```python
   print(dataset["train"])
   ```

4. Utilize the dataset for tampering detection tasks, model evaluation, and more.

---

## Citation

If you use MVTamperBench in your research, please cite:

```bibtex
@misc{agarwal2024mvtamperbenchevaluatingrobustnessvisionlanguage,
  title={MVTamperBench: Evaluating Robustness of Vision-Language Models},
  author={Amit Agarwal and Srikant Panda and Angeline Charles and Bhargava Kumar and Hitesh Patel and Priyanranjan Pattnayak and Taki Hasan Rafi and Tejaswini Kumar and Dong-Kyu Chae},
  year={2024},
  eprint={2412.19794},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.19794},
}
```

---

## License

MVTamperBench is released under the MIT License. See `LICENSE` for details.