verstar committed (verified)
Commit c535e9f · Parent: 9ea0cb5

Upload folder using huggingface_hub
20250307-191000-1-hy-xx-wjh-fjc/20250307-191000-1-hy-xx-wjh-fjc_metadata.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "start_time": "20250307-191000",
+   "vid_time": "20250307-191000",
+   "scene": 1,
+   "name": [
+     "hy",
+     "xx",
+     "wjh",
+     "fjc"
+   ],
+   "index": [
+     0,
+     1,
+     2,
+     3
+   ],
+   "log_time": "20250307-191000",
+   "height": [
+     1.07,
+     1.12,
+     1.07,
+     1.05
+   ],
+   "camera_pos": [
+     6.98,
+     0.33,
+     1.64,
+     51.32989436070831,
+     -17.34256386182449,
+     3.218943607085306
+   ],
+   "ear_orientation": [
+     0,
+     1
+   ],
+   "ear_position": [
+     5.76,
+     0.04,
+     0.59
+   ],
+   "spker2tag": [2, 0, 1, 3]
+ }
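The metadata file above describes one recording session. A minimal sketch of consuming it in Python follows; note that interpreting `height` as metres and `spker2tag` as a speaker-to-tracking-tag assignment is an assumption, not something the file itself states.

```python
import json

# Trimmed copy of the session metadata uploaded in this commit.
metadata_json = """{
  "start_time": "20250307-191000",
  "name": ["hy", "xx", "wjh", "fjc"],
  "height": [1.07, 1.12, 1.07, 1.05],
  "spker2tag": [2, 0, 1, 3]
}"""

meta = json.loads(metadata_json)

# Pair each speaker name with a height (assumed metres) and
# a tag id (assumed tracking-tag assignment) by position.
speakers = {
    name: {"height_m": h, "tag": tag}
    for name, h, tag in zip(meta["name"], meta["height"], meta["spker2tag"])
}
print(speakers["hy"])  # {'height_m': 1.07, 'tag': 2}
```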
20250307-191000-1-hy-xx-wjh-fjc/20250307-191000-1-hy-xx-wjh-fjc_缩混_cut_1.260_3.135_2.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c695b2fedfa64623fb971d21d7d0477a54ebc1392d9d446ea73253f9d5695857
+ size 1608
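The three lines above are a Git LFS pointer file: the repository stores only this stub, and the real payload lives in LFS storage keyed by the sha256 oid. A small sketch of parsing one (the helper name is ours, not part of any library):

```python
# Pointer text copied from the file above.
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:c695b2fedfa64623fb971d21d7d0477a54ebc1392d9d446ea73253f9d5695857\n"
    "size 1608\n"
)

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line on the first space, then split the oid
    into hash algorithm and digest."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),  # payload size in bytes
    }

info = parse_lfs_pointer(pointer_text)
print(info["algo"], info["size"])  # sha256 1608
```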
20250307-191000-1-hy-xx-wjh-fjc/20250307-191000-1-hy-xx-wjh-fjc_缩混_cut_1.260_3.135_2.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc3d0ee32e48507e2340c34460ca77679c13640f6062e0a4957ae57a44ef20b7
+ size 720088
20250307-191000-1-hy-xx-wjh-fjc/20250307-191000-1-hy-xx-wjh-fjc_缩混_cut_3.135_4.845_0.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb5a71cc04eec0316384dad7e50f39be4541af21ccd8ec810d3f65676509ae9d
+ size 1488
20250307-191000-1-hy-xx-wjh-fjc/20250307-191000-1-hy-xx-wjh-fjc_缩混_cut_3.135_4.845_0.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80898f450b5e6c1d3a08d05c744903653c2838c57b2e2e5503723b24e853cdd5
+ size 656728
README.md ADDED
@@ -0,0 +1,51 @@
+ ---
+ datasets:
+ - train
+ - test
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: train.csv
+   - split: test
+     path: test.csv
+   default: true
+ license: cc-by-4.0
+ ---
+
+ # MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations
+
+ Humans rely on multisensory integration to perceive spatial environments, where auditory cues enable sound source localization in three-dimensional space.
+ Despite the critical role of spatial audio in immersive technologies such as VR/AR, most existing multimodal datasets provide only monaural audio, which limits the development of spatial audio generation and understanding.
+ To address these challenges, we introduce MRSAudio, a large-scale multimodal spatial audio dataset designed to advance research in spatial audio understanding and generation.
+ MRSAudio spans four distinct components: MRSLife, MRSSpeech, MRSMusic, and MRSSing, covering diverse real-world scenarios.
+ The dataset includes synchronized binaural and ambisonic audio, exocentric and egocentric video, motion trajectories, and fine-grained annotations such as transcripts, phoneme boundaries, lyrics, scores, and prompts.
+ To demonstrate the utility and versatility of MRSAudio, we establish five foundational tasks: audio spatialization, spatial text-to-speech, spatial singing voice synthesis, spatial music generation, and sound event localization and detection.
+ Results show that MRSAudio enables high-quality spatial modeling and supports a broad range of spatial audio research.
+ Demos are available at [MRSAudio](https://mrsaudio.github.io).
+
+ ![image](head.png)
+
+ This is the dataset of MRSAudio: A Large-Scale Multimodal Recorded Spatial Audio Dataset with Refined Annotations. It contains 500 hours of multimodal spatial audio, integrating high-fidelity spatial recordings with synchronized video, 3D pose tracking, and rich semantic annotations to enable comprehensive modeling of real-world auditory scenes. The dataset comprises four subsets, each targeting distinct tasks and scenarios.
+
+ - **MRSLife** (150 h): captures daily activities such as board games, cooking, and office work, using egocentric video and FOA audio annotated with sound events and speech transcripts.
+ - **MRSSpeech** (200 h): includes binaural conversations from 50 speakers across diverse indoor environments, paired with video, 3D source positions, and complete scripts.
+ - **MRSSing** (75 h): features high-quality solo singing performances in Chinese, English, German, and French by 20 vocalists, each aligned with time-stamped lyrics and corresponding musical scores.
+ - **MRSMusic** (75 h): offers spatial recordings of 23 traditional Chinese, Western, and electronic instruments, with symbolic score annotations that support learning-based methods for symbolic-to-audio generation and fine-grained localization.
+
+ Together, these four subsets support a broad spectrum of spatial audio research problems, including event detection, sound localization, and binaural or ambisonic audio generation. By pairing spatial audio with synchronized exocentric and egocentric video, geometric tracking, and detailed semantic labels, MRSAudio enables new research directions in multimodal spatial understanding and cross-modal generation.
+
+ ### File Architecture
+
+ ```
+ .
+ ├── MRSLife
+ │   ├── MRSCook
+ │   ├── MRSDialogue
+ │   ├── MRSSound
+ │   └── MRSSports
+ ├── MRSMusic
+ ├── MRSSing
+ ├── MRSSpeech
+ └── README.md
+ ```
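The `configs` block in the README frontmatter maps each split to a CSV file, which is what `datasets.load_dataset` would use when pointed at this repository (the repository id is not shown in this diff). A small sketch of how that split-to-file mapping resolves:

```python
# Mirror of the `configs` entry in the README.md frontmatter above.
default_config = {
    "config_name": "default",
    "data_files": [
        {"split": "train", "path": "train.csv"},
        {"split": "test", "path": "test.csv"},
    ],
    "default": True,
}

# Resolve each split name to its backing data file, as a loader would.
split_to_path = {d["split"]: d["path"] for d in default_config["data_files"]}
print(split_to_path)  # {'train': 'train.csv', 'test': 'test.csv'}
```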
head.png ADDED

Git LFS Details

  • SHA256: eba56be2d8d0518dc451109df69c5c9cde1e798a1eb0a0200e58fdd2d46cd8df
  • Pointer size: 132 Bytes
  • Size of remote file: 2.5 MB