Dataset Card for DAVE 👨🏿‍🔬: Diagnostic benchmark for Audio Visual Evaluation

DAVE is a diagnostic benchmark for evaluating audio-visual models: it ensures that both modalities are required for a correct answer and provides fine-grained error analysis to reveal specific failure modes. Researchers can use DAVE to test and compare audio-visual models, refine multimodal architectures, or develop new methods for audio-visual alignment. It is not intended for training large-scale models, but for targeted evaluation and analysis. The dataset is loaded through a custom dataset script (dave.py), which recent versions of the datasets library no longer support, so make sure you have datasets==3.6.0 installed:

pip install datasets==3.6.0

Then, you can load and use the dataset as:

from datasets import load_dataset
import random

ego4d_dataset = load_dataset("gorjanradevski/dave", split="ego4d", keep_in_memory=True, trust_remote_code=True)
# or
epic_dataset = load_dataset("gorjanradevski/dave", split="epic", keep_in_memory=True, trust_remote_code=True)

# Perform inference with an audio-visual model on the audio-visual alignment task
sample = random.choice(epic_dataset)

# Obtain the audio/sound that is overlayed on the video
sound_effect = sample["audio_class"]

# Get the path to the video where the selected event is overlayed with the audio
video_path = sample["video_with_overlayed_audio_path"]

# For audio-visual alignment task: find which action matches the overlayed audio
options = sample["choice_metadata"]["audio_visual_alignment"]["choices"]
ground_truth_index = sample["choice_metadata"]["audio_visual_alignment"]["ground_truth"]
ground_truth = options[ground_truth_index]

# Construct the prompt
prompt = f"""What is the person in the video doing when {sound_effect} is heard? Answer using one of the following options:

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}
(E) {options[4]}

Answer only with the letter corresponding to the choice."""

# Load the video and perform inference with any model that can process audio and video input,
# or, to inspect the video and the prompt in a notebook:
# from IPython.display import Video, display
# print(prompt)
# display(Video(sample["video_with_overlayed_audio_path"], embed=True))
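
To turn this into an evaluation loop over the whole split, one option is the minimal sketch below. It reuses epic_dataset from above; run_model is a hypothetical placeholder for your own audio-visual inference call and is assumed to return a reply containing a letter such as "B".

import re

def letter_to_index(answer: str) -> int:
    # Map the first A-E letter in the model's reply to a choice index, e.g. "(B)" -> 1
    match = re.search(r"[A-E]", answer.upper())
    return ord(match.group(0)) - ord("A") if match else -1

correct = 0
for sample in epic_dataset:
    meta = sample["choice_metadata"]["audio_visual_alignment"]
    choices = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(meta["choices"]))
    prompt = (
        f"What is the person in the video doing when {sample['audio_class']} is heard? "
        f"Answer using one of the following options:\n\n{choices}\n\n"
        "Answer only with the letter corresponding to the choice."
    )
    # run_model is a hypothetical stand-in for your own audio-visual model call
    reply = run_model(sample["video_with_overlayed_audio_path"], prompt)
    correct += int(letter_to_index(reply) == meta["ground_truth"])

print(f"Audio-visual alignment accuracy: {correct / len(epic_dataset):.3f}")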

Working with All Evaluation Tasks

DAVE provides 7 evaluation tasks to diagnose model capabilities across modalities. Here's how to access all of them:

from datasets import load_dataset
import random

epic_dataset = load_dataset("gorjanradevski/dave", split="epic", keep_in_memory=True, trust_remote_code=True)
sample = random.choice(epic_dataset)

# Common information across all tasks
sound_effect = sample["audio_class"]
video_with_audio = sample["video_with_overlayed_audio_path"]
silent_video = sample["silent_video_path"]
audio_path = sample["overlayed_audio_path"]
choice_metadata = sample["choice_metadata"]

# ====== Task 1: Audio-Visual Alignment ======
# Find which action matches when the sound is heard
task = "audio_visual_alignment"
options = choice_metadata[task]["choices"]
ground_truth_idx = choice_metadata[task]["ground_truth"]

prompt = f"""What is the person in the video doing when {sound_effect} is heard?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}
(E) {options[4]}

Answer only with the letter."""

# ====== Task 2: Visual Only ======
# Identify the action during the overlayed event using only visual information
task = "visual_only"
options = choice_metadata[task]["choices"]
ground_truth_idx = choice_metadata[task]["ground_truth"]

prompt = f"""What action is happening during the highlighted segment?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}
(E) {options[4]}

Answer only with the letter."""
# Use silent_video for this task

# ====== Task 3: Audio Only ======
# Identify the action using only audio information
task = "audio_only"
options = choice_metadata[task]["choices"]
ground_truth_idx = choice_metadata[task]["ground_truth"]

prompt = f"""What action is associated with the sound you hear?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}
(E) {options[4]}

Answer only with the letter."""
# Use audio_path for this task

# ====== Task 4: Text Only ======
# Identify the action using only textual descriptions
task = "text_only"
options = choice_metadata[task]["choices"]
ground_truth_idx = choice_metadata[task]["ground_truth"]

# Get all event descriptions for context
event_descriptions = [event["narration"] for event in sample["events"]]

prompt = f"""Given these event descriptions: {', '.join(event_descriptions)}
Which action is most likely associated with the sound '{sound_effect}'?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}
(E) {options[4]}

Answer only with the letter."""

# ====== Task 5: Temporal Ordering ======
# Order the events chronologically
task = "temporal_ordering"
options = choice_metadata[task]["choices"]
ground_truth_order = choice_metadata[task]["ground_truth"]  # e.g., ['(D)', '(A)', '(B)', '(C)']

prompt = f"""Order these events chronologically:

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}

Provide the correct temporal order as a list like ['(A)', '(B)', '(C)', '(D)']."""

# ====== Task 6: Action Recognition ======
# Identify which action occurs during the overlayed segment
task = "action_recognition"
options = choice_metadata[task]["choices"]
ground_truth_indices = choice_metadata[task]["ground_truth"]  # Single-element list, e.g., [3]
ground_truth_idx = ground_truth_indices[0]

prompt = f"""Which action occurs during the overlayed audio segment?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}

Answer only with the letter."""

# ====== Task 7: Audio Classification ======
# Identify the overlayed sound
task = "audio_classification"
options = choice_metadata[task]["choices"]
ground_truth_idx = choice_metadata[task]["ground_truth"]

prompt = f"""What sound is overlayed on the video?

(A) {options[0]}
(B) {options[1]}
(C) {options[2]}
(D) {options[3]}

Answer only with the letter."""

# ====== Iterate through all tasks programmatically ======
for task_name, task_data in choice_metadata.items():
    print(f"\nTask: {task_name}")
    print(f"Choices: {task_data['choices']}")
    print(f"Ground Truth: {task_data['ground_truth']}")

    # Handle different ground truth formats
    if task_name == "temporal_ordering":
        print(f"Correct order: {task_data['ground_truth']}")
    elif task_name == "action_recognition":
        answer_idx = task_data['ground_truth'][0]
        print(f"Answer: {task_data['choices'][answer_idx]}")
    else:
        # Single choice tasks (audio_visual_alignment, visual_only, audio_only, text_only, audio_classification)
        answer = task_data['choices'][task_data['ground_truth']]
        print(f"Answer: {answer}")

Dataset Details

Dataset Description

DAVE (Diagnostic Audio-Visual Evaluation) is a benchmark dataset designed to systematically evaluate audio-visual models by addressing key limitations in existing datasets. Unlike prior benchmarks that often allow correct predictions using visual data alone, DAVE ensures that both audio and visual modalities are necessary for successful inference. It also provides fine-grained evaluation categories, allowing researchers to diagnose whether model errors stem from visual perception, audio interpretation, or audio-visual misalignment. DAVE is built to uncover specific issues in multimodal models and promote more targeted and robust improvements in audio-visual understanding.

Overview of DAVE

  • Curated by: Gorjan Radevski and Teodora Popordanoska
  • Language(s) (NLP): English
  • License: MIT

Uses

The DAVE dataset is intended as a diagnostic benchmark for evaluating multimodal models that process both audio and visual inputs and produce text output. It is specifically designed to:

  • Assess model performance where both audio and visual information are required, avoiding the visual bias present in many existing benchmarks.
  • Disentangle model errors across four core capabilities: action recognition, temporal understanding, audio classification, and audio-visual alignment.
  • Guide model improvement by evaluating across complementary tasks (multimodal synchronization, sound absence detection, sound discrimination) and providing granular, per-task evaluation.

Dataset Structure

The DAVE dataset consists of two main splits: ego4d and epic, each corresponding to curated samples from the Ego4D and EPIC-KITCHENS datasets respectively. Every example is structured to facilitate diagnostic evaluation of audio-visual models across multiple axes: visual, audio, temporal, and multimodal reasoning.

Data Fields

Each example contains the following fields (a short inspection sketch follows the list):

  • compressed_video_path: Path to a compressed version of the raw, unedited video containing four events with their original audio.
  • overlayed_event_index: Index of the event which we overlay with an unrelated audio sound (0-indexed, corresponds to position in events list).
  • events: List of dictionaries containing metadata about the events in the video:
    • start, end: Timestamps in format "HH:MM:SS.ffffff".
    • duration: Duration in seconds (float).
    • narration: Natural language descriptions of the action.
    • action: Structured action annotations.
    • raw_narration: Original narration text.
  • event_video_path: Path to the video clip of the overlayed event.
  • audio_class: The audio class overlaid in this instance (e.g., "crow", "dog", "car horn").
  • video_with_overlayed_audio_path: Path to the video with audio overlayed on the specified event.
  • silent_video_path: Path to the video without any audio.
  • overlayed_audio_path: Path to the standalone audio clip extracted from the video with the overlayed audio.
  • video_id: Identifier for the video.
  • participant_id: Identifier for the subject or participant (present in EPIC-KITCHENS split, None in Ego4D split).
  • type: Video type or category (e.g., "regular", "none_of_the_above_incorrect_audio", "none_of_the_above_no_sound"), indicating the type of sample.
  • choice_metadata: Dictionary containing multiple-choice evaluation tasks with the following structure:
    • audio_visual_alignment: Audio-visual synchronization task
      • choices: List of 5 action descriptions (4 events + "none of the above")
      • ground_truth: Integer index of the correct choice (0-4)
    • visual_only: Visual-only action recognition task
      • choices: List of 5 action descriptions (4 events + "none of the above")
      • ground_truth: Integer index of the correct choice (0-4)
    • audio_only: Audio-only action recognition task
      • choices: List of 5 action descriptions (4 events + "none of the above")
      • ground_truth: Integer index of the correct choice (0-4)
    • text_only: Text-based reasoning task
      • choices: List of 5 action descriptions (4 events + "none of the above")
      • ground_truth: Integer index of the correct choice (0-4)
    • temporal_ordering: Temporal ordering of events task
      • choices: List of 4 action descriptions
      • ground_truth: List of letter strings representing correct order (e.g., ['(D)', '(A)', '(B)', '(C)'])
    • action_recognition: Single-label action recognition task
      • choices: List of 4 action descriptions
      • ground_truth: Single-element list containing the index of the correct action (e.g., [3])
    • audio_classification: Audio classification task
      • choices: List of 4 audio class labels
      • ground_truth: Integer index of the correct audio class (0-3)
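
To confirm this schema locally, here is a minimal inspection sketch; it uses the same loading call as above and only touches the fields listed in this section.

from datasets import load_dataset

epic_dataset = load_dataset(
    "gorjanradevski/dave", split="epic", keep_in_memory=True, trust_remote_code=True
)
sample = epic_dataset[0]

# Top-level fields described above
print(sorted(sample.keys()))

# Event metadata: timestamps, duration, and narration of the first event
first_event = sample["events"][0]
print(first_event["start"], first_event["end"], first_event["duration"], first_event["narration"])

# Per-task structure of choice_metadata
for task_name, task_data in sample["choice_metadata"].items():
    print(task_name, "-", len(task_data["choices"]), "choices, ground_truth =", task_data["ground_truth"])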

Splits

  • epic: Samples sourced and annotated from EPIC-KITCHENS.
  • ego4d: Samples sourced and annotated from Ego4D.

Each split is structured identically in terms of fields, allowing for consistent benchmarking across domains.
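
Since the fields are identical, the same evaluation code can be reused verbatim on either split. As a small sketch, the snippet below loads each split and reports its size and the distribution of sample types:

from collections import Counter
from datasets import load_dataset

for split in ("epic", "ego4d"):
    dataset = load_dataset(
        "gorjanradevski/dave", split=split, keep_in_memory=True, trust_remote_code=True
    )
    # Both splits expose the same fields, so any per-task evaluation loop applies unchanged
    print(split, len(dataset), Counter(dataset["type"]))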

Bias, Risks, and Limitations

Since our dataset is built on top of the EPIC-KITCHENS and Ego4D datasets, we inherit all risks associated with these two datasets.

Citation

@inproceedings{radevski2025dave,
  title={{DAVE}: Diagnostic benchmark for Audio Visual Evaluation},
  author={Gorjan Radevski and Teodora Popordanoska and Matthew B. Blaschko and Tinne Tuytelaars},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2025},
  url={https://openreview.net/forum?id=4ZAX1NT0ms}
}

Contact

Reach out to either Gorjan at firstname.lastname@gmail.com or Teodora at firstname.lastname@kuleuven.be.
