Add dataset card and documentation for JavisDiT++ datasets

Hi! I'm Niels from the Hugging Face community science team. I've updated the dataset card to include more information about the JavisDiT++ datasets (JavisBench and JavisData-Audio), including links to the paper, project page, and source code. This will help researchers better discover and use your work on the Hub.
README.md
---
license: mit
task_categories:
- text-to-audio
tags:
- joint-audio-video-generation
- multimodal
- sounding-video
---

# JavisDiT++ Datasets

[**Project Page**](https://javisverse.github.io/JavisDiT2-page/) | [**Paper**](https://huggingface.co/papers/2602.19163) | [**GitHub**](https://github.com/JavisVerse/JavisDiT)

This repository contains data associated with **JavisDiT++**, a concise yet powerful framework for unified modeling and optimization of Joint Audio-Video Generation (JAVG). It produces synchronized and semantically aligned sound and vision from textual descriptions.

## Dataset Description

The JavisDiT project releases several data components:

- **JavisBench**: a comprehensive benchmark for evaluating joint audio-video generation across quality, consistency, and synchrony.
- **JavisData-Audio**: audio pre-training data used to initialize text-to-audio generation.

### Data Structure

Training and evaluation entries are managed using `.csv` files containing metadata such as video/audio paths, number of frames, resolution, and textual descriptions.

| Column | Description |
| --- | --- |
| `path` | Path to the video file |
| `id` | Unique identifier |
| `num_frames` | Total frames |
| `audio_path` | Path to the corresponding audio file |
| `text` | Textual description/prompt |
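As a minimal sketch of how these metadata files can be consumed, the documented columns parse directly with Python's standard `csv` module. The rows below are illustrative examples in the documented schema, not entries from the released data:

```python
import csv
import io

# Illustrative metadata in the documented CSV schema; paths and captions
# are hypothetical, not taken from the released files.
sample = """path,id,num_frames,audio_path,text
videos/0001.mp4,0001,120,audios/0001.wav,A dog barks as a car drives past
videos/0002.mp4,0002,240,audios/0002.wav,Rain falls steadily on a tin roof
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # CSV fields are read as strings; cast the frame count for numeric use.
    row["num_frames"] = int(row["num_frames"])

print(rows[0]["path"], rows[0]["num_frames"])  # prints: videos/0001.mp4 120
```

The same pattern applies to the real CSVs once downloaded: open the file with `csv.DictReader` and index rows by the column names in the table above.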

## Usage

You can download the benchmark data or pre-processed audio dataset using the Hugging Face CLI:

### Download JavisBench

```bash
hf download --repo-type dataset JavisVerse/JavisBench --local-dir data/eval/JavisBench
```

### Download JavisData-Audio

```bash
hf download --repo-type dataset JavisVerse/JavisData-Audio --local-dir /path/to/audio
```
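
For programmatic pipelines, the same repositories can also be fetched with the `huggingface_hub` Python API. This is a sketch, not part of the official instructions: the JavisData-Audio target directory is a hypothetical choice (substitute your own path), and the download call is guarded behind `__main__` because it triggers network transfers:

```python
from huggingface_hub import snapshot_download

# Dataset repos mapped to local targets; the JavisBench path mirrors the CLI
# command above, while the JavisData-Audio path is an arbitrary example.
DATASETS = {
    "JavisVerse/JavisBench": "data/eval/JavisBench",
    "JavisVerse/JavisData-Audio": "data/train/JavisData-Audio",
}

if __name__ == "__main__":
    for repo_id, local_dir in DATASETS.items():
        # Downloads the full repo snapshot; re-runs reuse the local cache.
        snapshot_download(repo_id=repo_id, repo_type="dataset", local_dir=local_dir)
```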

## Citation

If you find JavisDiT++ useful in your research, please cite the following papers:

```bibtex
@inproceedings{liu2026javisdit++,
  title     = {JavisDiT++: Unified Modeling and Optimization for Joint Audio-Video Generation},
  author    = {Liu, Kai and Zheng, Yanhao and Wang, Kai and Wu, Shengqiong and Zhang, Rongjunchen and Luo, Jiebo and Hatzinakos, Dimitrios and Liu, Ziwei and Fei, Hao and Chua, Tat-Seng},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026},
}

@inproceedings{liu2025javisdit,
  title     = {JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization},
  author    = {Liu, Kai and Li, Wei and Chen, Lai and Wu, Shengqiong and Zheng, Yanhao and Ji, Jiayi and Zhou, Fan and Luo, Jiebo and Liu, Ziwei and Fei, Hao and Chua, Tat-Seng},
  booktitle = {The Fourteenth International Conference on Learning Representations},
  year      = {2026},
}
```