---
license: apache-2.0
task_categories:
- audio-classification
- audio-to-audio
language:
- en
tags:
- audio
- sound-separation
- universal-sound-separation
- audio-mixing
- audioset
pretty_name: Hive Dataset
size_categories:
- 10M<n<100M
dataset_info:
  features:
  - name: mix_id
    dtype: string
  - name: split
    dtype: string
  - name: sample_rate
    dtype: int32
  - name: target_duration
    dtype: float64
  - name: num_sources
    dtype: int32
  - name: sources
    sequence:
    - name: source_id
      dtype: string
    - name: path
      dtype: string
    - name: label
      dtype: string
    - name: crop_start_second
      dtype: float64
    - name: crop_end_second
      dtype: float64
    - name: chunk_start_second
      dtype: float64
    - name: chunk_end_second
      dtype: float64
    - name: rms_gain
      dtype: float64
    - name: snr_db
      dtype: float64
    - name: applied_weight
      dtype: float64
  - name: global_normalization_factor
    dtype: float64
  - name: final_max_amplitude
    dtype: float64
  splits:
  - name: train
    num_examples: 5000000
  - name: validation
    num_examples: 500000
  - name: test
    num_examples: 100000
---
<h1 align="center">A Semantically Consistent Dataset for Data-Efficient Query-Based Universal Sound Separation</h1>
<p align="center">
<img src="asserts/logo.png" alt="Logo" width="250"/>
</p>
<p align="center">
<strong>Kai Li<sup>*</sup>, Jintao Cheng<sup>*</sup>, Chang Zeng, Zijun Yan, Helin Wang, Zixiong Su, Bo Zheng, Xiaolin Hu</strong><br>
<strong>Tsinghua University, Shanda AI, Johns Hopkins University</strong><br>
<strong><sup>*</sup>Equal contribution</strong><br>
<strong>Completed during Kai Li's internship at Shanda AI.</strong><br>
<a href="#">arXiv 2026</a> | <a href="https://shandaai.github.io/Hive/">Demo</a>
</p>
## Usage
```python
from datasets import load_dataset
# Load full dataset
dataset = load_dataset("ShandaAI/Hive")
# Load specific split
train_data = load_dataset("ShandaAI/Hive", split="train")
# Streaming mode (recommended for large datasets)
dataset = load_dataset("ShandaAI/Hive", streaming=True)
```
## Dataset Description
**Hive** is a high-quality synthetic dataset designed for **Universal Sound Separation (USS)**. Unlike traditional methods that rely on weakly labeled in-the-wild data, Hive uses an automated data collection pipeline to mine high-purity single-event segments from complex acoustic environments and synthesizes mixtures under semantically consistent constraints.
### Key Features
- **Purity over Scale**: 2.4k hours of source audio achieve performance competitive with million-hour baselines (~0.2% of their data scale)
- **Single-label Clean Supervision**: rigorous semantic-acoustic alignment that eliminates co-occurrence noise
- **Semantically Consistent Mixing**: a logic-based co-occurrence matrix that ensures realistic acoustic scenes
- **High Fidelity**: 44.1kHz sample rate for high-quality audio
### Dataset Scale
| Metric | Value |
|--------|-------|
| **Training Set Raw Audio** | 2,442 hours |
| **Val & Test Set Raw Audio** | 292 hours |
| **Mixed Samples** | 19.6M mixtures |
| **Total Mixed Duration** | ~22.4k hours |
| **Label Categories** | 283 classes |
| **Sample Rate** | 44.1 kHz |
| **Training Sample Duration** | 4 seconds |
| **Test Sample Duration** | 10 seconds |
### Dataset Splits
| Split | Samples | Description |
|-------|---------|-------------|
| Train | 17.5M | Training mixtures (4s duration) |
| Validation | 1.75M | Validation mixtures |
| Test | 350k | Test mixtures (10s duration) |
---
## Dataset Structure
### Directory Organization
```
hive-datasets-parquet/
├── README.md
├── train/
│   └── data.parquet
├── validation/
│   └── data.parquet
└── test/
    └── data.parquet
```
Each split contains a single Parquet file with all mixture metadata. The `num_sources` field indicates the number of sources (2-5) for each mixture.
---
## Data Fields
### Record Schema
Each record stores the complete generation parameters needed to reproduce a mixture sample:
```python
{
"mix_id": "sample_00000003",
"split": "train",
"sample_rate": 44100,
"target_duration": 4.0,
"num_sources": 2,
"sources": {
"source_id": ["s1", "s2"],
"path": ["relative/path/to/audio1", "relative/path/to/audio2"],
"label": ["Ocean", "Rain"],
"crop_start_second": [1.396, 2.5],
"crop_end_second": [5.396, 6.5],
"chunk_start_second": [35.0, 20.0],
"chunk_end_second": [45.0, 30.0],
"rms_gain": [3.546, 2.1],
"snr_db": [0.0, -3.0],
"applied_weight": [3.546, 1.487]
},
"global_normalization_factor": 0.786,
"final_max_amplitude": 0.95
}
```
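The columnar layout above can be sanity-checked with a few lines of Python. This is a minimal sketch (not part of the official tooling) that verifies the source lists all match `num_sources` and that each `applied_weight` agrees with `rms_gain` and `snr_db` per the formula described below; the `validate_record` helper is hypothetical:

```python
import math

# Example metadata record (trimmed to the fields being checked).
record = {
    "mix_id": "sample_00000003",
    "num_sources": 2,
    "sources": {
        "source_id": ["s1", "s2"],
        "rms_gain": [3.546, 2.1],
        "snr_db": [0.0, -3.0],
        "applied_weight": [3.546, 1.487],
    },
}

def validate_record(rec):
    """Check that every source column has num_sources entries and that
    applied_weight == rms_gain * 10 ** (snr_db / 20) for each source."""
    n = rec["num_sources"]
    src = rec["sources"]
    assert all(len(col) == n for col in src.values()), "column length mismatch"
    for gain, snr, weight in zip(src["rms_gain"], src["snr_db"], src["applied_weight"]):
        expected = gain * 10 ** (snr / 20)
        assert math.isclose(weight, expected, rel_tol=1e-3), (weight, expected)
    return True

print(validate_record(record))  # True
```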
### Field Descriptions
#### 1. Basic Info Fields
| Field | Type | Description |
|-------|------|-------------|
| `mix_id` | string | Unique identifier for the mixture task |
| `split` | string | Dataset partition (`train` / `validation` / `test`) |
| `sample_rate` | int32 | Audio sample rate in Hz (44100) |
| `target_duration` | float64 | Target duration in seconds (4.0 for train, 10.0 for test) |
| `num_sources` | int32 | Number of audio sources in this mixture (2-5) |
#### 2. Source Information (`sources`)
Metadata required to reproduce the mixing process for each audio source. Stored in columnar format (dict of lists) for efficient Parquet storage:
| Field | Type | Description |
|-------|------|-------------|
| `source_id` | list[string] | Source identifiers (`s1`, `s2`, ...) |
| `path` | list[string] | Relative paths to the source audio files |
| `label` | list[string] | AudioSet ontology labels for each source |
| `chunk_start_second` | list[float64] | Start times (seconds) for reading from original audio files |
| `chunk_end_second` | list[float64] | End times (seconds) for reading from original audio files |
| `crop_start_second` | list[float64] | Precise start positions (seconds) for reproducible random extraction |
| `crop_end_second` | list[float64] | Precise end positions (seconds) for reproducible random extraction |
| `rms_gain` | list[float64] | Energy normalization coefficients: $\text{target\_rms} / \text{current\_rms}$ |
| `snr_db` | list[float64] | Signal-to-noise ratios in dB assigned to each source |
| `applied_weight` | list[float64] | Final scaling weights: $\text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$ |
#### 3. Mixing Parameters
Global processing parameters after combining multiple audio sources:
| Field | Type | Description |
|-------|------|-------------|
| `global_normalization_factor` | float64 | Anti-clipping scaling coefficient: $0.95 / \text{max\_val}$ |
| `final_max_amplitude` | float64 | Maximum amplitude threshold (0.95) to prevent bit-depth overflow |
### Detailed Field Explanations
#### Cropping Logic
- `chunk_start/end_second`: Defines the reading interval from the original audio file
- `crop_start/end_second`: Records the precise random cropping position, ensuring exact reproducibility across runs
#### Energy Normalization (`rms_gain`)
Adjusts different audio sources to the same energy level:
$$\text{rms\_gain} = \frac{\text{target\_rms}}{\text{current\_rms}}$$
#### Signal-to-Noise Ratio (`snr_db`)
The SNR value assigned to each source, sampled from a predefined range using `random.uniform(snr_range[0], snr_range[1])`.
#### Applied Weight
The comprehensive scaling weight combining energy normalization and SNR adjustment:
$$\text{applied\_weight} = \text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$$
This is the final coefficient applied to the original waveform.
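The weight computation and SNR sampling can be sketched as follows; the seed and `snr_range` here are assumptions for illustration, not values from the dataset:

```python
import random

def applied_weight(rms_gain, snr_db):
    """Final per-source scaling: energy normalization times the SNR offset."""
    return rms_gain * 10 ** (snr_db / 20)

# SNR values are drawn uniformly from a range; seeding makes the draw repeatable.
rng = random.Random(0)            # assumed seed for reproducibility
snr_range = (-5.0, 5.0)           # assumed range, not specified in the card
snr = rng.uniform(*snr_range)

w = applied_weight(3.546, 0.0)    # values from the example record above
print(w)  # 3.546 (a 0 dB SNR leaves the rms_gain unchanged)
```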
#### Global Normalization Factor
Prevents audio clipping after mixing:
$$\text{global\_normalization\_factor} = \frac{0.95}{\text{max\_val}}$$
Where `max_val` is the **peak amplitude (absolute value)** of the mixed signal.
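A minimal sketch of the anti-clipping step, using a toy mixture whose peak would otherwise exceed 1.0:

```python
def global_normalization(mixture, final_max_amplitude=0.95):
    """Scale a mixed waveform so its peak absolute amplitude is 0.95."""
    max_val = max(abs(s) for s in mixture)
    factor = final_max_amplitude / max_val
    return factor, [s * factor for s in mixture]

mix = [0.4, -1.2, 0.9]  # toy mixture that would clip at |1.2|
factor, normalized = global_normalization(mix)
print(round(factor, 4), max(abs(s) for s in normalized))  # peak is now 0.95
```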
---
## Reproducing the Mixtures
### Download Metadata
```python
from datasets import load_dataset
# Load the train split metadata
dataset = load_dataset("ShandaAI/Hive", split="train")
```
### Generate Mixed Audio
Please refer to the [official GitHub repository](https://github.com/ShandaAI/Hive) for the complete audio generation pipeline.
```bash
# Clone the repository
git clone https://github.com/ShandaAI/Hive.git
cd Hive/hive_dataset
# Generate mixtures from metadata
python mix_from_metadata/mix_from_metadata.py \
--metadata_dir /path/to/downloaded/metadata \
--output_dir ./hive_dataset \
--dataset_paths dataset_paths.json \
--num_processes 16
```
---
## Source Datasets
Hive integrates **12 public datasets** to construct a long-tailed acoustic space:
| # | Dataset | Clips | Duration (h) | License |
|---|---------|-------|--------------|---------|
| 1 | BBC Sound Effects | 369,603 | 1,020.62 | Remix License |
| 2 | AudioSet | 326,890 | 896.61 | CC BY |
| 3 | VGGSound | 115,191 | 319.10 | CC BY 4.0 |
| 4 | MUSIC21 | 32,701 | 90.28 | YouTube Standard |
| 5 | FreeSound | 17,451 | 46.90 | CC0/BY/BY-NC |
| 6 | ClothoV2 | 14,759 | 38.19 | Non-Commercial Research |
| 7 | Voicebank-DEMAND | 12,376 | 9.94 | CC BY 4.0 |
| 8 | AVE | 3,054 | 6.91 | CC BY-NC-SA |
| 9 | SoundBible | 2,501 | 5.78 | CC BY 4.0 |
| 10 | DCASE | 1,969 | 5.46 | Academic Use |
| 11 | ESC50 | 1,433 | 1.99 | CC BY-NC 3.0 |
| 12 | FSD50K | 636 | 0.80 | Creative Commons |
| | **Total** | **898,564** | **2,442.60** | |
**Important Note**: This repository releases only **metadata** (mixing parameters and source references) for reproducibility. Users must independently download and prepare the source datasets in accordance with their respective licenses.
---
## Citation
If you use this dataset, please cite:
```bibtex
```
---
## License
This dataset metadata is released under the **Apache License 2.0**.
Please note that the source audio files are subject to their original licenses. Users must comply with the respective licenses when using the source datasets.
---
## Acknowledgments
We extend our gratitude to the researchers and organizations who curated the foundational datasets that made Hive possible:
- **BBC Sound Effects** - Professional-grade recordings with broadcast-level fidelity
- **AudioSet** (Google) - Large-scale audio benchmark
- **VGGSound** (University of Oxford) - Real-world acoustic diversity
- **FreeSound** (MTG-UPF) - Rich crowdsourced soundscapes
- And all other contributing datasets
---
## Contact
For questions or issues, please open an issue on the [GitHub repository](https://github.com/ShandaAI/Hive) or contact the authors.