Commit 88d5eb6 by LeFleur808 and JusperLee · 0 Parent(s)

Duplicate from ShandaAI/Hive

Co-authored-by: Kai Li <JusperLee@users.noreply.huggingface.co>
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
language:
- en
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- audio-to-audio
pretty_name: Hive Dataset
arxiv: 2601.22599
tags:
- audio
- sound-separation
- universal-sound-separation
- audio-mixing
- audioset
dataset_info:
  features:
  - name: mix_id
    dtype: string
  - name: split
    dtype: string
  - name: sample_rate
    dtype: int32
  - name: target_duration
    dtype: float64
  - name: num_sources
    dtype: int32
  - name: sources
    sequence:
    - name: source_id
      dtype: string
    - name: path
      dtype: string
    - name: label
      dtype: string
    - name: crop_start_second
      dtype: float64
    - name: crop_end_second
      dtype: float64
    - name: chunk_start_second
      dtype: float64
    - name: chunk_end_second
      dtype: float64
    - name: rms_gain
      dtype: float64
    - name: snr_db
      dtype: float64
    - name: applied_weight
      dtype: float64
  - name: global_normalization_factor
    dtype: float64
  - name: final_max_amplitude
    dtype: float64
  splits:
  - name: train
    num_examples: 5000000
  - name: validation
    num_examples: 500000
  - name: test
    num_examples: 100000
---

<h1 align="center">A Semantically Consistent Dataset for Data-Efficient Query-Based Universal Sound Separation</h1>
<p align="center">
  <img src="asserts/logo.png" alt="Logo" width="250"/>
</p>
<p align="center">
  <strong>Kai Li<sup>*</sup>, Jintao Cheng<sup>*</sup>, Chang Zeng, Zijun Yan, Helin Wang, Zixiong Su, Bo Zheng, Xiaolin Hu</strong><br>
  <strong>Tsinghua University, Shanda AI, Johns Hopkins University</strong><br>
  <strong><sup>*</sup>Equal contribution</strong><br>
  <strong>Completed during Kai Li's internship at Shanda AI.</strong><br>
  <a href="https://arxiv.org/abs/2601.22599">📜 arXiv 2026</a> | <a href="https://github.com/ShandaAI/Hive">💻 Code</a> | <a href="https://shandaai.github.io/Hive/">🎶 Demo</a>
</p>

## Usage

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("ShandaAI/Hive")

# Load a specific split
train_data = load_dataset("ShandaAI/Hive", split="train")

# Streaming mode (recommended for large datasets)
dataset = load_dataset("ShandaAI/Hive", streaming=True)
```

## 📄 Dataset Description

**Hive** is a high-quality synthetic dataset designed for **Universal Sound Separation (USS)**. Unlike traditional approaches that rely on weakly labeled in-the-wild data, Hive uses an automated data-collection pipeline to mine high-purity single-event segments from complex acoustic environments and synthesizes mixtures under semantically consistent constraints.

### Key Features

- **Purity over Scale**: 2.4k hours of audio achieving competitive performance against million-hour baselines (~0.2% of the data scale)
- **Single-label Clean Supervision**: Rigorous semantic-acoustic alignment that eliminates co-occurrence noise
- **Semantically Consistent Mixing**: A logic-based co-occurrence matrix ensures realistic acoustic scenes
- **High Fidelity**: 44.1 kHz sample rate for high-quality audio

### Dataset Scale

| Metric | Value |
|--------|-------|
| **Training Set Raw Audio** | 2,442 hours |
| **Val & Test Set Raw Audio** | 292 hours |
| **Mixed Samples** | 19.6M mixtures |
| **Total Mixed Duration** | ~22.4k hours |
| **Label Categories** | 283 classes |
| **Sample Rate** | 44.1 kHz |
| **Training Sample Duration** | 4 seconds |
| **Test Sample Duration** | 10 seconds |

### Dataset Splits

| Split | Samples | Description |
|-------|---------|-------------|
| Train | 17.5M | Training mixtures (4 s duration) |
| Validation | 1.75M | Validation mixtures |
| Test | 350k | Test mixtures (10 s duration) |

---

## 📂 Dataset Structure

### Directory Organization

```
hive-datasets-parquet/
├── README.md
├── train/
│   └── data.parquet
├── validation/
│   └── data.parquet
└── test/
    └── data.parquet
```

Each split contains a single Parquet file with all mixture metadata. The `num_sources` field gives the number of sources (2-5) in each mixture.

---

## 📋 Data Fields

### JSON Schema

Each JSON object contains the complete generation parameters needed to reproduce a mixture sample:

```python
{
    "mix_id": "sample_00000003",
    "split": "train",
    "sample_rate": 44100,
    "target_duration": 4.0,
    "num_sources": 2,
    "sources": {
        "source_id": ["s1", "s2"],
        "path": ["relative/path/to/audio1", "relative/path/to/audio2"],
        "label": ["Ocean", "Rain"],
        "crop_start_second": [1.396, 2.5],
        "crop_end_second": [5.396, 6.5],
        "chunk_start_second": [35.0, 20.0],
        "chunk_end_second": [45.0, 30.0],
        "rms_gain": [3.546, 2.1],
        "snr_db": [0.0, -3.0],
        "applied_weight": [3.546, 1.487]
    },
    "global_normalization_factor": 0.786,
    "final_max_amplitude": 0.95
}
```
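Because `sources` is stored column-wise (a dict of lists), turning it back into per-source records is a single transpose step. A minimal sketch; the helper name `unpack_sources` is illustrative, not part of the dataset API:

```python
def unpack_sources(sources: dict) -> list[dict]:
    """Transpose the columnar dict-of-lists into per-source records."""
    keys = list(sources)
    n = len(sources[keys[0]])
    return [{k: sources[k][i] for k in keys} for i in range(n)]

# A trimmed-down `sources` entry from the schema example above.
sources = {
    "source_id": ["s1", "s2"],
    "label": ["Ocean", "Rain"],
    "snr_db": [0.0, -3.0],
}
records = unpack_sources(sources)
print(records[1])  # {'source_id': 's2', 'label': 'Rain', 'snr_db': -3.0}
```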

### Field Descriptions

#### 1. Basic Info Fields

| Field | Type | Description |
|-------|------|-------------|
| `mix_id` | string | Unique identifier for the mixture task |
| `split` | string | Dataset partition (`train` / `validation` / `test`) |
| `sample_rate` | int32 | Audio sample rate in Hz (44100) |
| `target_duration` | float64 | Target duration in seconds (4.0 for train, 10.0 for test) |
| `num_sources` | int32 | Number of audio sources in this mixture (2-5) |

#### 2. Source Information (`sources`)

Metadata required to reproduce the mixing process for each audio source, stored in columnar format (a dict of lists) for efficient Parquet storage:

| Field | Type | Description |
|-------|------|-------------|
| `source_id` | list[string] | Source identifiers (`s1`, `s2`, ...) |
| `path` | list[string] | Relative paths to the source audio files |
| `label` | list[string] | AudioSet ontology labels for each source |
| `chunk_start_second` | list[float64] | Start times (seconds) for reading from the original audio files |
| `chunk_end_second` | list[float64] | End times (seconds) for reading from the original audio files |
| `crop_start_second` | list[float64] | Precise start positions (seconds) for reproducible random extraction |
| `crop_end_second` | list[float64] | Precise end positions (seconds) for reproducible random extraction |
| `rms_gain` | list[float64] | Energy normalization coefficients: $\text{target\_rms} / \text{current\_rms}$ |
| `snr_db` | list[float64] | Signal-to-noise ratios in dB assigned to each source |
| `applied_weight` | list[float64] | Final scaling weights: $\text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$ |

#### 3. Mixing Parameters

Global processing parameters applied after combining the audio sources:

| Field | Type | Description |
|-------|------|-------------|
| `global_normalization_factor` | float64 | Anti-clipping scaling coefficient: $0.95 / \text{max\_val}$ |
| `final_max_amplitude` | float64 | Maximum amplitude threshold (0.95) to prevent bit-depth overflow |

### Detailed Field Explanations

#### Cropping Logic
- `chunk_start/end_second`: Defines the reading interval from the original audio file
- `crop_start/end_second`: Records the precise random cropping position, ensuring exact reproducibility across runs
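The two-stage cropping above amounts to plain array slicing on the decoded waveform. A minimal sketch on a synthetic signal; treating the crop offsets as relative to the chunk start is an assumption here (the official pipeline defines the exact reference point), and all variable names are illustrative:

```python
import numpy as np

sample_rate = 44100

# Stand-in for a decoded source file (60 s of audio).
full_audio = np.zeros(60 * sample_rate, dtype=np.float32)

# Stage 1: read the coarse chunk from the original file.
chunk_start_second, chunk_end_second = 35.0, 45.0
chunk = full_audio[int(chunk_start_second * sample_rate):
                   int(chunk_end_second * sample_rate)]

# Stage 2: apply the recorded random crop within the chunk.
crop_start_second, crop_end_second = 1.396, 5.396
crop = chunk[int(crop_start_second * sample_rate):
             int(crop_end_second * sample_rate)]

print(len(crop) / sample_rate)  # 4.0
```

Because both offsets are stored with the sample, re-running the slice reproduces the exact same 4-second training excerpt.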

#### Energy Normalization (`rms_gain`)
Adjusts different audio sources to the same energy level:
$$\text{rms\_gain} = \frac{\text{target\_rms}}{\text{current\_rms}}$$
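As a sketch, the gain can be computed directly from the waveform. The `target_rms` value below is a hypothetical reference level, not a documented constant of the pipeline:

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square energy of a waveform."""
    return float(np.sqrt(np.mean(x ** 2)))

rng = np.random.default_rng(0)
source = 0.25 * rng.standard_normal(44100)

target_rms = 0.1                     # assumed reference level
rms_gain = target_rms / rms(source)  # target_rms / current_rms
normalized = source * rms_gain

print(round(rms(normalized), 6))  # 0.1
```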

#### Signal-to-Noise Ratio (`snr_db`)
The SNR value assigned to each source, sampled from a predefined range via `random.uniform(snr_range[0], snr_range[1])`.

#### Applied Weight
The overall scaling weight, combining energy normalization and SNR adjustment:
$$\text{applied\_weight} = \text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$$

This is the final coefficient applied to the original waveform.
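The formula reproduces the `applied_weight` values shown in the schema example; a minimal check:

```python
def applied_weight(rms_gain: float, snr_db: float) -> float:
    # Final per-source scale: energy normalization times SNR adjustment.
    return rms_gain * 10 ** (snr_db / 20)

# Values from the schema example above.
print(round(applied_weight(3.546, 0.0), 3))  # 3.546
print(round(applied_weight(2.1, -3.0), 3))   # 1.487
```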

#### Global Normalization Factor
Prevents audio clipping after mixing:
$$\text{global\_normalization\_factor} = \frac{0.95}{\text{max\_val}}$$

Where `max_val` is the **peak amplitude (absolute value)** of the mixed signal.
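Putting the formulas together, here is a minimal sketch of the final mixing step on synthetic waveforms. Rescaling unconditionally so the peak lands at 0.95 is an assumption for illustration; the official pipeline lives in the GitHub repository:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4 * 44100  # 4-second mixture at 44.1 kHz

# Two already-cropped sources and their per-source applied weights.
sources = [0.1 * rng.standard_normal(n), 0.1 * rng.standard_normal(n)]
weights = [3.546, 1.487]

# Weighted sum of the sources.
mixture = sum(w * s for w, s in zip(weights, sources))

# Anti-clipping normalization: rescale the peak to 0.95.
max_val = float(np.max(np.abs(mixture)))
global_normalization_factor = 0.95 / max_val
mixture = mixture * global_normalization_factor

print(round(float(np.max(np.abs(mixture))), 4))  # 0.95
```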

---

## 🔧 Usage

### Download Metadata

```python
from datasets import load_dataset

# Load a specific split
dataset = load_dataset("ShandaAI/Hive", split="train")
```

### Generate Mixed Audio

Please refer to the [official GitHub repository](https://github.com/ShandaAI/Hive) for the complete audio generation pipeline.

```bash
# Clone the repository
git clone https://github.com/ShandaAI/Hive.git
cd Hive/hive_dataset

# Generate mixtures from metadata
python mix_from_metadata/mix_from_metadata.py \
    --metadata_dir /path/to/downloaded/metadata \
    --output_dir ./hive_dataset \
    --dataset_paths dataset_paths.json \
    --num_processes 16
```

---

## 📚 Source Datasets

Hive integrates **12 public datasets** to construct a long-tailed acoustic space:

| # | Dataset | Clips | Duration (h) | License |
|---|---------|-------|--------------|---------|
| 1 | BBC Sound Effects | 369,603 | 1,020.62 | Remix License |
| 2 | AudioSet | 326,890 | 896.61 | CC BY |
| 3 | VGGSound | 115,191 | 319.10 | CC BY 4.0 |
| 4 | MUSIC21 | 32,701 | 90.28 | YouTube Standard |
| 5 | FreeSound | 17,451 | 46.90 | CC0/BY/BY-NC |
| 6 | ClothoV2 | 14,759 | 38.19 | Non-Commercial Research |
| 7 | Voicebank-DEMAND | 12,376 | 9.94 | CC BY 4.0 |
| 8 | AVE | 3,054 | 6.91 | CC BY-NC-SA |
| 9 | SoundBible | 2,501 | 5.78 | CC BY 4.0 |
| 10 | DCASE | 1,969 | 5.46 | Academic Use |
| 11 | ESC50 | 1,433 | 1.99 | CC BY-NC 3.0 |
| 12 | FSD50K | 636 | 0.80 | Creative Commons |
| | **Total** | **898,564** | **2,442.60** | |

**Important Note**: This repository releases only **metadata** (mixing parameters and source references) for reproducibility. Users must independently download and prepare the source datasets under their respective licenses.

---

## 📖 Citation

If you use this dataset, please cite:

```bibtex
@article{li2026hive,
  title={A Semantically Consistent Dataset for Data-Efficient Query-Based Universal Sound Separation},
  author={Li, Kai and Cheng, Jintao and Zeng, Chang and Yan, Zijun and Wang, Helin and Su, Zixiong and Zheng, Bo and Hu, Xiaolin},
  journal={arXiv preprint arXiv:2601.22599},
  year={2026}
}
```

---

## ⚖️ License

This dataset's metadata is released under the **Apache License 2.0**.

Please note that the source audio files remain subject to their original licenses; users must comply with those licenses when using the source datasets.

---

## 🙏 Acknowledgments

We extend our gratitude to the researchers and organizations who curated the foundational datasets that made Hive possible:

- **BBC Sound Effects** - Professional-grade recordings with broadcast-level fidelity
- **AudioSet** (Google) - Large-scale audio benchmark
- **VGGSound** (University of Oxford) - Real-world acoustic diversity
- **FreeSound** (MTG-UPF) - Rich crowdsourced soundscapes
- And all other contributing datasets

---

## 📬 Contact

For questions or issues, please open an issue on the [GitHub repository](https://github.com/ShandaAI/Hive) or contact the authors.
asserts/logo.png ADDED

Git LFS Details

  • SHA256: cc439987217d8b8dbc3d9e48eab565f2e2e15199f68a4d245e5097d202093298
  • Pointer size: 132 Bytes
  • Size of remote file: 4.14 MB
test/data.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9e9916476565357dcf675eee888232df3d574e82851ce93676b70050ccd477e9
size 15678703
train/data.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:074fab7cf25d100e0b4617706347816b5566b14d26ca94e14dece8522814b304
size 1106516997
validation/data.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1ffc6c303b57cdaa2d2ec6d5edf317a2652d2200a3472fc7a31d2e795c89ceff
size 110792699