---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
- video-classification
extra_gated_prompt: You agree to not use the dataset to conduct experiments that cause
  harm to human subjects. Please note that the data in this dataset may be subject
  to other agreements. Before using the data, be sure to read the relevant agreements
  carefully to ensure compliant use. Video copyrights belong to the original video
  creators or platforms and are for academic research use only.
extra_gated_fields:
  Name: text
  Company/Organization: text
  Country: text
  E-Mail: text
modalities:
- Video
- Text
configs:
- config_name: action_sequence
  data_files: json/action_sequence.json
- config_name: moving_count
  data_files: json/moving_count.json
- config_name: action_prediction
  data_files: json/action_prediction.json
- config_name: episodic_reasoning
  data_files: json/episodic_reasoning.json
- config_name: action_antonym
  data_files: json/action_antonym.json
- config_name: action_count
  data_files: json/action_count.json
- config_name: scene_transition
  data_files: json/scene_transition.json
- config_name: object_shuffle
  data_files: json/object_shuffle.json
- config_name: object_existence
  data_files: json/object_existence.json
- config_name: fine_grained_pose
  data_files: json/fine_grained_pose.json
- config_name: unexpected_action
  data_files: json/unexpected_action.json
- config_name: moving_direction
  data_files: json/moving_direction.json
- config_name: state_change
  data_files: json/state_change.json
- config_name: object_interaction
  data_files: json/object_interaction.json
- config_name: character_order
  data_files: json/character_order.json
- config_name: action_localization
  data_files: json/action_localization.json
- config_name: counterfactual_inference
  data_files: json/counterfactual_inference.json
- config_name: fine_grained_action
  data_files: json/fine_grained_action.json
- config_name: moving_attribute
  data_files: json/moving_attribute.json
- config_name: egocentric_navigation
  data_files: json/egocentric_navigation.json
---

# MVTamperBench Dataset

## Overview

**MVTamperBench** is a robust benchmark designed to evaluate Vision-Language Models (VLMs) against adversarial video tampering effects. It leverages the diverse and well-structured MVBench dataset, systematically augmented with five distinct tampering techniques:

1. **Frame Dropping**: Removes a 1-second segment, creating temporal discontinuity.
2. **Masking**: Overlays a black rectangle on a 1-second segment, simulating visual data loss.
3. **Repetition**: Repeats a 1-second segment, introducing temporal redundancy.
4. **Rotation**: Rotates a 1-second segment by 180 degrees, introducing spatial distortion.
5. **Substitution**: Replaces a 1-second segment with a random clip from another video, disrupting the temporal and contextual flow.

The tampering effects are applied to the middle of each video to ensure consistent evaluation across models.
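
As a minimal sketch (not the benchmark's actual implementation), two of these effects can be expressed as pure-Python operations on a list of decoded frames; the `frames` list, the `fps` value, and the helper names here are illustrative assumptions:

```python
def middle_segment(n_frames, fps, duration_s=1.0):
    """Bounds of a segment of `duration_s` seconds centred in the video."""
    seg = int(duration_s * fps)
    start = max(0, (n_frames - seg) // 2)
    return start, min(n_frames, start + seg)

def drop_middle(frames, fps):
    """Frame Dropping: remove the centred 1-second segment."""
    s, e = middle_segment(len(frames), fps)
    return frames[:s] + frames[e:]

def repeat_middle(frames, fps):
    """Repetition: duplicate the centred 1-second segment in place."""
    s, e = middle_segment(len(frames), fps)
    return frames[:s] + frames[s:e] + frames[s:e] + frames[e:]
```

For a 3-second clip at 30 fps, `drop_middle` returns 60 frames (frames 30-59 removed) and `repeat_middle` returns 120 frames with the middle second played twice.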

---

## Dataset Details

The MVTamperBench dataset is built upon the **MVBench dataset**, a widely recognized collection used in video-language evaluation. It features a broad spectrum of content to ensure robust model evaluation, including:

- **Content Diversity**: Spanning a variety of objects, activities, and settings.
- **Temporal Dynamics**: Videos with temporal dependencies for coherence testing.
- **Benchmark Utility**: Recognized datasets enabling comparisons with prior work.

### Incorporated Datasets

The MVTamperBench dataset integrates videos from several sources, each contributing unique characteristics:

| Dataset Name             | Primary Scene Type and Unique Characteristics      |
|--------------------------|----------------------------------------------------|
| STAR                     | Indoor actions and object interactions             |
| PAXION                   | Real-world scenes with nuanced actions             |
| Moments in Time (MiT) V1 | Indoor/outdoor scenes across varied contexts       |
| FunQA                    | Humor-focused, creative, real-world events         |
| CLEVRER                  | Simulated scenes for object movement and reasoning |
| Perception Test          | First/third-person views for object tracking       |
| Charades-STA             | Indoor human actions and interactions              |
| MoVQA                    | Diverse scenes for scene transition comprehension  |
| VLN-CE                   | Indoor navigation from an agent's perspective      |
| TVQA                     | TV show scenes for episodic reasoning              |

### Dataset Expansion

The original MVBench dataset contains 3,487 videos, which have been systematically expanded through tampering effects, resulting in a total of **22,122 videos**. This ensures:

- **Diversity**: Varied adversarial challenges for robust evaluation.
- **Volume**: Sufficient data for training and testing.

Below is a visual representation of the tampered video length distribution:

![Distribution of videos]()

---

## Benchmark Construction

MVTamperBench is built with modularity, scalability, and reproducibility at its core:

- **Modularity**: Each tampering effect is implemented as a reusable class, allowing for easy adaptation.
- **Scalability**: Supports customizable tampering parameters, such as location and duration.
- **Integration**: Fully compatible with VLMEvalKit, enabling seamless evaluations of tampering robustness alongside general VLM capabilities.

By maintaining consistent tampering duration (1 second) and location (center of the video), MVTamperBench ensures fair and comparable evaluations across models.
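
The modular design described above might be organized along these lines; the class names, the `location`/`duration_s` parameters, and the frame-list representation are illustrative assumptions, not the benchmark's actual code:

```python
from dataclasses import dataclass

@dataclass
class TamperEffect:
    """Base class: a tampering effect with configurable duration and location."""
    duration_s: float = 1.0   # length of the tampered segment, in seconds
    location: float = 0.5     # centre of the segment as a fraction of the video

    def bounds(self, n_frames, fps):
        seg = int(self.duration_s * fps)
        start = max(0, int(self.location * n_frames) - seg // 2)
        return start, min(n_frames, start + seg)

    def apply(self, frames, fps):
        raise NotImplementedError

class FrameDropping(TamperEffect):
    def apply(self, frames, fps):
        s, e = self.bounds(len(frames), fps)
        return frames[:s] + frames[e:]

class Rotation(TamperEffect):
    def apply(self, frames, fps, rotate=lambda f: f):
        # `rotate` stands in for a per-frame 180-degree flip (e.g. via NumPy/OpenCV)
        s, e = self.bounds(len(frames), fps)
        return frames[:s] + [rotate(f) for f in frames[s:e]] + frames[e:]
```

Because duration and location live on the base class, new effects only override `apply`, and the defaults (1 second, video centre) reproduce the fixed evaluation protocol.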

---

## Download Dataset

You can access the MVTamperBench dataset directly from the Hugging Face repository:

[Download MVTamperBench Dataset](https://huggingface.co/datasets/Srikant86/MVTamperBench)

---

## How to Use

1. Clone the Hugging Face repository:
   ```bash
   git clone https://huggingface.co/datasets/Srikant86/MVTamperBench
   cd MVTamperBench
   ```

2. Load a task configuration with the Hugging Face `datasets` library (each config listed above corresponds to one task):
   ```python
   from datasets import load_dataset

   # Pick any config, e.g. action_sequence
   dataset = load_dataset("Srikant86/MVTamperBench", "action_sequence")
   ```

3. Explore the dataset structure and metadata:
   ```python
   print(dataset["train"])
   ```

4. Utilize the dataset for tampering detection tasks, model evaluation, and more.

---

## Citation

If you use MVTamperBench in your research, please cite:

[Paper](https://arxiv.org/abs/2412.19794) · [Code](https://github.com/OpenGVLab/MVTamperBench)

```bibtex
@misc{agarwal2025mvtamperbenchevaluatingrobustnessvisionlanguage,
      title={MVTamperBench: Evaluating Robustness of Vision-Language Models},
      author={Amit Agarwal and Srikant Panda and Angeline Charles and Bhargava Kumar and Hitesh Patel and Priyaranjan Pattnayak and Taki Hasan Rafi and Tejaswini Kumar and Dong-Kyu Chae},
      year={2025},
      eprint={2412.19794},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.19794},
}
```

---

## License

MVTamperBench is built upon MVBench and therefore operates under the same license as the original MVBench. For more details, please refer to the [MVBench README](https://huggingface.co/datasets/OpenGVLab/MVBench/blob/main/README.md).