Enhance dataset card: Add task categories, tags, paper & code links, sample usage, and full README details
#5 opened by nielsr (HF Staff)

README.md CHANGED

---
language:
- en
- zh
license: cc-by-4.0
datasets:
- train
configs:
- split: train
  path: train.csv
  default: true
task_categories:
- audio-to-audio
- text-to-speech
- text-to-audio
- automatic-speech-recognition
- audio-classification
- video-classification
tags:
- spatial-audio
- multimodal
- binaural
- ambisonic
- video
- speech
- music
- sound-events
---

# MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations

Paper: [MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations](https://huggingface.co/papers/2510.10396)
Project Page: [MRSAudio](https://mrsaudio.github.io)
Code: [https://github.com/MRSAudio/MRSAudio_Main](https://github.com/MRSAudio/MRSAudio_Main)

Humans rely on multisensory integration to perceive spatial environments, where auditory cues enable sound source localization in three-dimensional space.
Despite the critical role of spatial audio in immersive technologies such as VR/AR, most existing multimodal datasets provide only monaural audio, which limits the development of spatial audio generation and understanding.
To address these challenges, we introduce MRSAudio, a large-scale multimodal spatial audio dataset designed to advance research in spatial audio understanding and generation.
MRSAudio spans four distinct components: MRSLife, MRSSpeech, MRSMusic, and MRSSing, covering diverse real-world scenarios.
The dataset includes synchronized binaural and ambisonic audio, exocentric and egocentric video, motion trajectories, and fine-grained annotations such as transcripts, phoneme boundaries, lyrics, scores, and prompts.
To demonstrate the utility and versatility of MRSAudio, we establish five foundational tasks: audio spatialization, spatial text-to-speech, spatial singing voice synthesis, spatial music generation, and sound event localization and detection.
Results show that MRSAudio enables high-quality spatial modeling and supports a broad range of spatial audio research.
Demos and dataset access are available at [MRSAudio](https://mrsaudio.github.io).

- **MRSSing** (75 h): features high-quality solo singing performances in Chinese, English, German, and French by 20 vocalists, each aligned with time-stamped lyrics and corresponding musical scores.
- **MRSMusic** (75 h): offers spatial recordings of 23 traditional Chinese, Western, and electronic instruments, with symbolic score annotations that support learning-based methods for symbolic-to-audio generation and fine-grained localization.

Together, these four subsets support a broad spectrum of spatial audio research problems, including event detection, sound localization, and binaural or ambisonic audio generation. By pairing spatial audio with synchronized exocentric and egocentric video, geometric tracking, and detailed semantic labels, MRSAudio enables new research directions in multimodal spatial understanding and cross-modal generation.
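
The binaural tracks carry the interaural time and level differences that listeners use to localize sources. As a self-contained illustration (synthetic noise rather than MRSAudio recordings; `estimate_itd` is a hypothetical helper, not part of the dataset tooling), the sketch below estimates an interaural time difference by cross-correlation:

```python
import numpy as np

def estimate_itd(left, right, sr):
    """Estimate the interaural time difference (seconds) by cross-correlation.

    Positive ITD means the right channel lags, i.e. the source sits
    toward the listener's left.
    """
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))  # sample lags for "full" mode
    return -lags[np.argmax(corr)] / sr

# Synthetic binaural pair: delay the right channel by 20 samples,
# mimicking a source on the listener's left.
rng = np.random.default_rng(0)
sr = 48_000
src = rng.standard_normal(sr // 10)                 # 100 ms of white noise
delay = 20
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])

print(f"{estimate_itd(left, right, sr) * 1e6:.0f} us")  # 417 us for 20 samples at 48 kHz
```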

### Sample Usage

You can download the full dataset using `git lfs`:

```bash
git lfs install
git clone git@hf.co:datasets/verstar/MRSAudio
```
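
The ambisonic recordings can be rendered toward arbitrary directions. As a minimal sketch with a synthetic B-format signal (assuming the common ambiX convention — ACN channel order W, Y, Z, X with SN3D normalization; this card does not state which convention MRSAudio uses), a first-order frame is steered with virtual cardioid microphones:

```python
import numpy as np

def encode_foa(mono, azimuth):
    """Encode a mono signal as horizontal first-order ambisonics (ambiX: W, Y, Z, X)."""
    return np.stack([
        mono,                        # W: omnidirectional
        mono * np.sin(azimuth),      # Y: left-right
        np.zeros_like(mono),         # Z: up-down (horizontal source)
        mono * np.cos(azimuth),      # X: front-back
    ])

def virtual_cardioid(foa, azimuth):
    """Steer a virtual cardioid microphone toward `azimuth` (radians)."""
    w, y, _, x = foa
    return 0.5 * w + 0.5 * (np.cos(azimuth) * x + np.sin(azimuth) * y)

sr = 48_000
t = np.arange(sr // 100) / sr
tone = np.sin(2 * np.pi * 440 * t)        # 10 ms test tone
foa = encode_foa(tone, np.pi / 2)         # source hard left

toward = virtual_cardioid(foa, np.pi / 2)   # aimed at the source
away = virtual_cardioid(foa, -np.pi / 2)    # aimed opposite

print(np.allclose(toward, tone), np.allclose(away, 0))  # True True
```

A cardioid aimed at the source recovers the dry signal while the opposite direction cancels it, which is the basis of simple beamforming on B-format audio.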

### File Architecture

```
├── MRSSing
├── MRSSpeech
└── README.md
```

### Citation

If you find this dataset or code useful for your research, please cite our work:

```bibtex
@article{guo2025mrsaudio,
  title={MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations},
  author={Guo, Wenxiang and Pan, Changhao and Zhu, Zhiyuan and Hu, Xintong and Zhang, Yu and Zhao, Zhou},
  journal={arXiv preprint arXiv:2510.10396},
  year={2025}
}
```

---

### Disclaimer

Any organization or individual is prohibited from using any technology mentioned in this paper to generate someone's speech without his or her consent, including but not limited to government leaders, political figures, and celebrities. Failure to comply with this provision may constitute a violation of copyright law.

### Relevant Projects

Many thanks to:
- [Make-An-Audio 2](https://github.com/bytedance/Make-An-Audio-2)
- [STARSS23](https://github.com/sony/audio-visual-seld-dcase2023)
- [BinauralGrad](https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad)
- [WhisperX](https://github.com/m-bain/whisperX)
|