Add comprehensive dataset card for AVROBUSTBENCH

#4
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +92 -0
---
task_categories:
- audio-classification
- video-classification
language:
- en
tags:
- audio-visual
- robustness
- benchmark
- corruption
---

# $\texttt{AVROBUSTBENCH}$: Benchmarking the Robustness of Audio-Visual Recognition Models at Test-Time

[Paper](https://huggingface.co/papers/2506.00358) | [Code](https://github.com/sarthaxxxxx/AV-C-Robustness-Benchmark) | [Demo](https://www.youtube.com/watch?v=hYdcRO3BuIY&ab_channel=SarthakMaharana)

<div align="center">
<img src="https://github.com/sarthaxxxxx/AV-C-Robustness-Benchmark/blob/main/assets/av_robustness_vgg_samples-1.png?raw=true" alt="Samples from VGGSound-2C" width="100%">
</div>

## Abstract

While recent audio-visual models have demonstrated impressive performance, their robustness to distributional shifts at test-time remains not fully understood. Existing robustness benchmarks mainly focus on single modalities, making them insufficient for thoroughly assessing the robustness of audio-visual models. Motivated by real-world scenarios where shifts can occur $\textit{simultaneously}$ in both audio and visual modalities, we introduce $\texttt{AVROBUSTBENCH}$, a comprehensive benchmark designed to evaluate the test-time robustness of audio-visual recognition models. $\texttt{AVROBUSTBENCH}$ comprises four audio-visual benchmark datasets, $\texttt{AUDIOSET-2C}$, $\texttt{VGGSOUND-2C}$, $\texttt{KINETICS-2C}$, and $\texttt{EPICKITCHENS-2C}$, each incorporating 75 bimodal audio-visual corruptions that are $\textit{co-occurring}$ and $\textit{correlated}$. Through extensive evaluations, we observe that state-of-the-art supervised and self-supervised audio-visual models exhibit declining robustness as corruption severity increases. Furthermore, online test-time adaptation (TTA) methods, on $\texttt{VGGSOUND-2C}$ and $\texttt{KINETICS-2C}$, offer minimal improvements in performance under bimodal corruptions. We further propose $\texttt{AV2C}$, a simple TTA approach enabling on-the-fly cross-modal fusion by penalizing high-entropy samples, which achieves improvements on $\texttt{VGGSOUND-2C}$. We hope that $\texttt{AVROBUSTBENCH}$ will steer the development of more effective and robust audio-visual TTA approaches.

## Datasets

We release the code and datasets comprising $\texttt{AVROBUSTBENCH}$. We propose four audio-visual datasets: $\texttt{AUDIOSET-2C}$, $\texttt{VGGSOUND-2C}$, $\texttt{KINETICS-2C}$, and $\texttt{EPICKITCHENS-2C}$. These datasets span diverse domains, environments, and action categories, offering a broad and realistic evaluation suite for audio-visual recognition models.

We construct our datasets by introducing our proposed corruptions to the test sets of AudioSet, VGGSound, Kinetics-Sounds, and Epic-Kitchens.
* $\texttt{AUDIOSET-2C}$ contains 16,742 audio-video test pairs. Each clip is roughly 10s long, and the dataset spans 527 classes.
* $\texttt{VGGSOUND-2C}$ contains 14,046 test pairs.
* $\texttt{KINETICS-2C}$ contains 3,111 clips across 32 classes, each around 10s long.
* $\texttt{EPICKITCHENS-2C}$ contains 205 egocentric video clips capturing daily kitchen tasks, with an average duration of 7.4 minutes each.
35
+
36
+ ## Sample Usage
37
+
38
+ The `dataset.py` file in the accompanying code repository (`https://github.com/sarthaxxxxx/AV-C-Robustness-Benchmark`) contains the logic for applying any of the released corruptions at any severity level to audio-visual datasets.
39
+
40
+ ### Set up the environment
41
+ This repo requires Python 3.10>. Create an environment and run `pip install -r requirements.txt` before continuing.

### Extract frames and audio from the videos
To extract the frames and audio from the videos in the dataset, please refer to this [repo](https://github.com/YuanGongND/cav-mae/tree/master/src/preprocess) for instructions. After following them, you should have directories containing the image frames and audio files for each video.
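
For a quick extraction on a few videos, the same step can also be sketched directly with `ffmpeg`. This is a minimal sketch with illustrative paths, frame rate, and sample rate, not the benchmark's exact preprocessing settings; the linked repo is the reference pipeline.

```python
# Sketch: ffmpeg commands for extracting frames and 16 kHz mono audio
# from one video. Paths, fps, and sample rate are illustrative choices.
from pathlib import Path

def extraction_cmds(video, frames_dir, wav_path, fps=1, sr=16000):
    """Return the two ffmpeg command lines as argument lists."""
    frame_cmd = ["ffmpeg", "-i", str(video), "-vf", f"fps={fps}",
                 str(Path(frames_dir) / "frame_%d.jpg")]
    audio_cmd = ["ffmpeg", "-i", str(video), "-vn", "-ac", "1",
                 "-ar", str(sr), str(wav_path)]
    return frame_cmd, audio_cmd

# Run each command with subprocess.run(cmd, check=True) once ffmpeg is installed.
```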

### Setting up the json file
To begin, create a json file for your dataset containing each sample's wav path, labels, video ID, and video path. For example, `BkjpjAohg-0` is the video ID for the file `BkjpjAohg-0.mp4`, and the video path is the directory containing the frames for that video ID. As an example, using AudioSet:

```json
{
  "data": [
    {
      "wav": "/home/adrian/Data/AudioSet/eval_audio/BkjpjAohg-0.wav",
      "labels": "/m/04rlf,/m/07pjwq1,/m/07s72n,/m/08cyft",
      "video_id": "BkjpjAohg-0",
      "video_path": "/home/adrian/Data/AudioSet/eval_frames"
    },
    {
      "wav": "/home/adrian/Data/AudioSet/eval_audio/4ufZrEAJnJI.wav",
      "labels": "/m/0242l",
      "video_id": "4ufZrEAJnJI",
      "video_path": "/home/adrian/Data/AudioSet/eval_frames"
    }
  ]
}
```

The dataset json file need not follow this structure; modify it for your needs, but if you do, you will also need to modify `dataset.py`. Note that `dataset.py` does not contain any logic for labels, since different datasets have different label structures. You can easily extend our code with your own metadata, labels, and additional corruptions. The dataset class makes it easy to get an image/audio pair from your dataset and add a corruption to both modalities.

Create the json file and pass its path into the dataset class. We provide `create_json.py` as a reference for how to create the json file. `/assets` contains a sample video, frames, audio, label metadata, and json.
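
The json file can also be generated programmatically. The following is a minimal sketch under assumed conventions (one `.wav` per video ID in a flat audio directory, and a `labels` dict you supply); `create_json.py` in the repo is the reference implementation.

```python
# Sketch: build the dataset json from extracted audio/frame directories.
# The directory layout and the label lookup are illustrative assumptions.
import json
from pathlib import Path

def build_json(audio_dir, frames_dir, labels):
    """labels maps video_id -> comma-separated label string."""
    entries = []
    for wav in sorted(Path(audio_dir).glob("*.wav")):
        vid = wav.stem                      # e.g. "BkjpjAohg-0"
        entries.append({
            "wav": str(wav),
            "labels": labels.get(vid, ""),
            "video_id": vid,
            "video_path": str(frames_dir),
        })
    return {"data": entries}

# with open("dataset.json", "w") as f:
#     json.dump(build_json("eval_audio", "eval_frames", my_labels), f, indent=2)
```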

### Using the dataset
Below is the code to use the dataset. The possible corruptions are `gaussian`, `impulse`, `shot`, `speckle`, `compression`, `snow`, `frost`, `spatter`, `wind`, `rain`, `underwater`, `concert`, `smoke`, `crowd`, and `interference`. The `severity` is an integer between 1 and 5.

```python
from dataset import AVRobustBench

file_path = 'path/to/your/json'
dataset = AVRobustBench(file_path, corruption='gaussian', severity=5, frame_num=4, all_frames=False)
```

Each entry in `AVRobustBench` is a `(frames, audio)` tuple, with the option of applying corruptions to both. `frames` is a list of PIL images if `all_frames=True`, or a list with a single image otherwise, while `audio` is an in-memory `BytesIO` .wav file-like object. PIL images are the standard for visual data, but there is no single standard for audio files: some codebases use `torchaudio`, others `librosa`, `soundfile`, or something else. Because `audio` is a .wav file-like object held in memory, it can be passed as a .wav to any audio library.
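
As an illustration of that last point, the stdlib `wave` module alone can inspect the in-memory object; `dataset[0]` below is assumed to be an entry from the snippet above, and any library that accepts file-like objects works the same way.

```python
# Sketch: inspecting the in-memory .wav object from one dataset entry
# with the stdlib `wave` module.
import wave

def audio_info(audio):
    """Return (sample_rate, n_frames, n_channels) of a .wav file-like object."""
    audio.seek(0)                      # rewind in case it was read before
    with wave.open(audio, "rb") as w:
        return w.getframerate(), w.getnframes(), w.getnchannels()

# frames, audio = dataset[0]           # dataset from the snippet above
# sr, n_frames, n_channels = audio_info(audio)
```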

### Creating a corrupted video
Additionally, we provide a static function in `AVRobustBench` that takes an mp4 video file path, applies the visual and audio corruptions to the video, and returns the result for display or for saving as a preprocessing step if required. The video is held in memory as a `BytesIO` object, but can be saved to disk via the `save_path` parameter. Below is the code to create corrupted videos.

```python
from dataset import AVRobustBench

video_path = 'path/to/your/video'
corrupted_path = 'path/to/your/corrupted/video'
corrupted_video = AVRobustBench.create_video(video_path, corruption="spatter", severity=5, save_path=corrupted_path)
```
91
+
92
+ `demo.ipynb` showcases a few examples of using the dataset.