---
license: mit
---

Source: https://www.youtube.com/@visualaccent/videos. All rights belong to the original dataset creator.
# VADA-AVSR: an audio-visual dataset of non-native English ("accents") and English varieties ("dialects")
We preprocessed the Visual Accent and Dialect Archive (https://archive.mith.umd.edu/mith-2020/vada/index.html) for audio-visual speech recognition (AVSR), automatic speech recognition (ASR), and visual speech recognition / lip reading (VSR).

This version currently contains only read speech: specifically, readings of the Rainbow Passage, which contains every sound in English.
## Citation
```bibtex
@misc{vada,
  title  = {{Visual Accent and Dialect Archive}},
  author = {Leigh Wilson Smiley},
  year   = {2019},
  note   = {Accessed on October 11, 2025 at \url{https://web.archive.org/web/20230203145410/https://visualaccentdialectarchive.com/}},
}

@misc{vada_avsr,
  title  = {VADA-AVSR: Audio-Visual Speech Recognition with Non-Standard Speech},
  author = {Anya Ji and Kalvin Chang and David Chan and Alane Suhr},
  year   = {2025},
  note   = {Accessed on DATE at \url{https://huggingface.co/datasets/Berkeley-NLP/visual_accent_dialect_archive}},
}
```
## Loading the dataset (example)
```python
from huggingface_hub import snapshot_download
from datasets import load_dataset, Audio, Video

repo = "Berkeley-NLP/visual_accent_dialect_archive"

# 1) Materialize the repo locally (cached by HF)
root = snapshot_download(repo_id=repo, repo_type="dataset")
print(root)

# 2) Load the split CSV from the snapshot
ds = load_dataset("csv", data_files={"test": f"{root}/test.csv"}, split="test")
print(ds)

# 3) Convert repo-relative paths -> absolute local paths
def absolutize(ex):
    ex["audio_segment_path"] = f"{root}/" + ex["audio_segment_path"]
    ex["video_noaudio_segment_path"] = f"{root}/" + ex["video_noaudio_segment_path"]
    ex["video_withaudio_segment_path"] = f"{root}/" + ex["video_withaudio_segment_path"]
    return ex

ds = ds.map(absolutize)
ex = ds[0]
print(ex)

# 4) Decode with HF decoders (optional)
# ds = ds.rename_columns({
#     "audio_segment_path": "audio",
#     "video_noaudio_segment_path": "video_no_audio",
#     "video_withaudio_segment_path": "video_with_audio",
# })
# ds = ds.cast_column("audio", Audio())
# ds = ds.cast_column("video_no_audio", Video())
# ds = ds.cast_column("video_with_audio", Video())
# ex = ds[0]
# print(ex["audio"]["sampling_rate"], ex["audio"]["array"].shape)
# video = ex["video_no_audio"]
# frame0 = next(iter(video))
# print(frame0.shape)
```
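Since the dataset targets ASR/AVSR/VSR benchmarking, you will likely want to score model transcripts against the Rainbow Passage references. A minimal, dependency-free word-error-rate helper is sketched below; how you obtain the reference string from each row is up to you, since the transcript column name is not shown in the example above.

```python
# Minimal word error rate (WER): word-level Levenshtein distance
# divided by the reference length. A sketch for quick evaluation;
# for published numbers, prefer a standard library such as jiwer.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# 1 deletion + 1 substitution + 1 insertion over 8 reference words
print(wer("when the sunlight strikes raindrops in the air",
          "when sunlight strikes rain drops in the air"))  # → 0.375
```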