Audiovisual Translation Dubbing Dataset
A paired video dataset for training video dubbing and lip-sync models, as described in the paper "JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion".
📄 Project: justdubit.github.io
💻 Code: github.com/justdubit/just-dub-it
🤗 Model: huggingface.co/justdubit/justdubit
Dataset Description
This dataset contains paired video samples for training synchronized audio-video dubbing models. Each sample includes:
- Target video: The dubbed output with translated speech and synchronized lip movements
- Reference video: The original source video used as conditioning
- Face segmentation masks: (Optional) Binary masks highlighting facial regions for masked loss training
- Text captions: Structured prompts with speaker description, language, and dialogue
Intended Use
- Training LoRA adapters for video dubbing tasks
- Fine-tuning audio-video models for lip-sync generation
- Research in multimodal translation and dubbing
Dataset Structure
The dataset is organized as follows:
```
audiovisual_translation_dub/
├── dataset.json          # Metadata file with captions and paths
├── target/               # Dubbed output videos
│   ├── 001.mp4
│   ├── 002.mp4
│   └── ...
├── reference/            # Original source videos
│   ├── 001.mp4
│   ├── 002.mp4
│   └── ...
└── face_masks/           # (Optional) Face segmentation masks
    ├── 001.mp4
    ├── 002.mp4
    └── ...
```
Data Format
The `dataset.json` file contains entries in the following format:

```json
[
  {
    "caption": "The woman is speaking English, saying: 'Hello, how are you today?'",
    "media_path": "videos/target_001.mp4",
    "reference_path": "references/reference_001.mp4",
    "mask_path": "masks/mask_001.mp4"
  }
]
```
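As a minimal sketch of how these entries might be consumed, the loader below reads `dataset.json` and resolves each sample's media paths relative to the dataset root. The field names (`caption`, `media_path`, `reference_path`, `mask_path`) come from the format above; the function name and the assumption that `mask_path` may be absent for samples without face masks are illustrative, not part of the official tooling.

```python
import json
from pathlib import Path


def load_dub_entries(root):
    """Load dataset.json and resolve per-sample media paths.

    Returns a list of dicts with the caption, target video path,
    reference video path, and (optional) face-mask path.
    """
    root = Path(root)
    with open(root / "dataset.json") as f:
        entries = json.load(f)

    samples = []
    for entry in entries:
        samples.append({
            "caption": entry["caption"],
            "target": root / entry["media_path"],
            "reference": root / entry["reference_path"],
            # Face masks are optional, so the key may be missing (assumption).
            "mask": root / entry["mask_path"] if "mask_path" in entry else None,
        })
    return samples
```

Each resolved path can then be handed to a video decoder or a training data pipeline; keeping the mask as `None` when absent lets a masked-loss trainer fall back to an unmasked objective for those samples.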