Improve dataset card: update task category, add usage & citation
This PR enhances the dataset card for **MRSDrama**, the dataset presented in [ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting](https://huggingface.co/papers/2504.20630).
Key changes include:
- Refined `task_categories` metadata from `text-to-speech` to `text-to-audio` to better capture the dataset's focus on immersive and spatial audio generation.
- Expanded content with a summary of the paper and the dataset's role.
- Integrated comprehensive "Sample Usage" and "Evaluation" guidelines from the GitHub repository, including details on dependencies, data preparation, metrics, and running the evaluation.
- Added the BibTeX citation for convenient referencing.
- Included details on the dataset's architecture for easier understanding of its structure.
- Added a badge for the project demo page for improved discoverability.
---
language:
- zh
license: cc-by-nc-sa-4.0
size_categories:
- n<1K
task_categories:
- text-to-audio
tags:
- spatial-audio
- drama
- binaural
---
# ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting

#### Yu Zhang*, Wenxiang Guo*, Changhao Pan*, Zhiyuan Zhu*, Tao Jin, Zhou Zhao | Zhejiang University

This repository contains **MRSDrama**, the dataset for the paper [ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting](https://arxiv.org/abs/2504.20630).

[arXiv](https://arxiv.org/abs/2504.20630) | [GitHub](https://github.com/AaronZ345/ISDrama) | [Demo Page](https://aaronz345.github.io/ISDramaDemo)

## Introduction

The **MRSDrama** dataset supports research in multimodal immersive spatial drama generation: producing continuous multi-speaker binaural speech with dramatic prosody from multimodal prompts. Because this task requires simultaneously modeling spatial information and dramatic prosody, and because data collection is costly, this work introduces MRSDrama, the first multimodal recorded spatial drama dataset. It contains binaural drama audio, scripts, videos, geometric poses, and textual prompts.

We provide the **full corpus for free** in this repository. You can also visit our [Demo Page](https://aaronz345.github.io/ISDramaDemo) for audio samples from the dataset and model results.

**Please note that by using MRSDrama, you are accepting the terms of its [license](./dataset_license.md).**

## Data Architecture

The dataset is organized hierarchically: each top-level folder contains a set of dramas, and each drama folder contains a subfolder with the cut WAV files, an MP4 video file, and a JSON file holding all annotation information.

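As a minimal sketch of how a drama folder with this layout could be traversed (the function name and glob patterns below are illustrative assumptions, not part of any official tooling):

```python
import json
from pathlib import Path

def load_drama(drama_dir):
    """Gather the cut WAV clips, the MP4 video, and the JSON
    annotations from a single drama folder (illustrative layout)."""
    drama_dir = Path(drama_dir)
    wavs = sorted(drama_dir.rglob("*.wav"))    # cut binaural clips (in a subfolder)
    videos = sorted(drama_dir.glob("*.mp4"))   # accompanying video
    annotations = {}
    for meta in drama_dir.glob("*.json"):      # annotation file(s)
        annotations[meta.name] = json.loads(meta.read_text(encoding="utf-8"))
    return {"wavs": wavs, "videos": videos, "annotations": annotations}
```

The returned dictionary keeps raw paths for the audio/video and parsed JSON for the annotations, so downstream code can decide how to decode each modality.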
## Updates

- 2025.07: We released the evaluation code of MRSDrama!
- 2025.07: We released the full dataset of MRSDrama!
- 2025.07: ISDrama was accepted by ACMMM 2025!

## Evaluation of ISDrama

The evaluation process for ISDrama is based on the code and models of "BAT: Learning to Reason about Spatial Sounds with Large Language Models".

### Dependencies

A suitable [conda](https://conda.io/) environment named `isdrama_eva` can be created and activated with:

```bash
conda env create -f environment.yml
bash timm_patch/patch.sh
conda activate isdrama_eva
```

### Checkpoint Preparation

Please download the finetuned `BAT` encoder [checkpoint](https://huggingface.co/datasets/zhisheng01/SpatialAudio/blob/main/SpatialAST/finetuned.pth) and place it at:

```bash
./evaluation/ckpt/finetuned.pth
```

Make sure the path exists (create the `ckpt` directory if necessary).

### Data Preparation

For evaluation, you must prepare paired ground-truth audio and generated audio. Place them respectively in:

```bash
./evaluation/data/gt
./evaluation/data/infer
```

The expected directory layout is:

```
.
├── gt
│   ├── 0000.wav
│   ├── 0001.wav
│   ├── 0002.wav
│   └── 0003.wav
└── infer
    ├── 0000.wav
    ├── 0001.wav
    ├── 0002.wav
    └── 0003.wav
```

Important:

- The files inside `gt` and `infer` must correspond one-to-one.
- Filenames and counts must match exactly (e.g., `gt/0002.wav` pairs with `infer/0002.wav`).
- Ensure sampling rates and channel configurations are consistent if required by downstream metrics.

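The pairing rules above can be checked before running the metrics. This is a small illustrative helper (not part of the released evaluation code):

```python
from pathlib import Path

def check_pairing(gt_dir, infer_dir):
    """Verify that gt/ and infer/ hold exactly the same WAV filenames."""
    gt = {p.name for p in Path(gt_dir).glob("*.wav")}
    infer = {p.name for p in Path(infer_dir).glob("*.wav")}
    missing = sorted(gt - infer)   # references without a generated counterpart
    extra = sorted(infer - gt)     # generated files without a reference
    if missing or extra:
        raise ValueError(f"unpaired files: missing={missing}, extra={extra}")
    return sorted(gt)              # the paired basenames
```

Running it once on `./evaluation/data/gt` and `./evaluation/data/infer` surfaces naming mismatches early, before the heavier feature extraction starts.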
### Metrics

We adopt various metrics to assess performance:

**Semantic & Acoustic Metrics:**
- **Character Error Rate (CER)**: Assesses transcript/content accuracy.
- **Cosine Similarity (SIM)**: Measures speaker timbre similarity between the generated audio and the prompt/reference audio (e.g., via speaker embeddings).
- **F0 Frame Error (FFE)**: Evaluates prosody fidelity by comparing voiced/unvoiced decisions and pitch (F0) frames.

**Spatial Metrics:**
- **IPD MAE**: Mean Absolute Error between ground-truth and generated Interaural Phase Differences.
- **ILD MAE**: Mean Absolute Error between ground-truth and generated Interaural Level Differences.
- **Angle Cosine Similarity (ANG Cos)**: Cosine similarity between ground-truth and generated direction (azimuth/elevation) angle embeddings.
- **Distance Cosine Similarity (Dis Cos)**: Cosine similarity between ground-truth and generated distance embeddings.

> **Note:** Cosine-based spatial scores lie in the range [-1, 1], with higher values indicating closer alignment of spatial embeddings.

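For intuition, the spatial measures can be sketched in NumPy. This simplified version (a frame-wise STFT with hypothetical parameter choices) is not the exact implementation invoked by the evaluation scripts:

```python
import numpy as np

def ipd_ild(stereo, n_fft=512, hop=256, eps=1e-8):
    """IPD/ILD per time-frequency bin from a (2, T) stereo signal."""
    win = np.hanning(n_fft)
    def stft(x):
        frames = [x[i:i + n_fft] * win
                  for i in range(0, len(x) - n_fft + 1, hop)]
        return np.fft.rfft(np.stack(frames), axis=-1)
    L, R = stft(stereo[0]), stft(stereo[1])
    ipd = np.angle(L * np.conj(R))                                # interaural phase difference (rad)
    ild = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # interaural level difference (dB)
    return ipd, ild

def mae(a, b):
    """Mean absolute error, as used for IPD MAE / ILD MAE."""
    return float(np.mean(np.abs(a - b)))

def cos_sim(u, v, eps=1e-8):
    """Cosine similarity, as used for ANG Cos / Dis Cos on embeddings."""
    u, v = np.ravel(u), np.ravel(v)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))
```

With identical left and right channels, both IPD and ILD are zero everywhere, which matches the intuition that a centered, equidistant source produces no interaural differences.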
### Running the Evaluation

Run the following script to perform the evaluation pipeline:

```bash
cd evaluation
bash ./evaluate/eval.sh
```

The script `evaluate/eval.sh` executes the following three stages:

1. Extract angle and distance embeddings using the BAT encoder.
2. Extract IPD & ILD features from paired ground-truth and generated stereo audio.
3. Compute metrics: MAE (for IPD/ILD) and cosine similarities (for angle and distance).

> Ensure that ground-truth and generated audio files are correctly paired and preprocessed before running the script.

## Citation

If you find this dataset or code useful in your research, please cite our work:

```bibtex
@article{zhang2025isdrama,
  title={ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting},
  author={Zhang, Yu and Guo, Wenxiang and Pan, Changhao and Zhu, Zhiyuan and Jin, Tao and Zhao, Zhou},
  journal={arXiv preprint arXiv:2504.20630},
  year={2025}
}
```

## Disclaimer

Any organization or individual is prohibited from using any technology mentioned in this paper to generate a person's speech without their consent, including but not limited to government leaders, political figures, and celebrities. Violating this condition may place you in breach of copyright laws.