---
license: apache-2.0
task_categories:
  - audio-classification
  - audio-to-audio
language:
  - en
tags:
  - audio
  - sound-separation
  - universal-sound-separation
  - audio-mixing
  - audioset
pretty_name: Hive Dataset
size_categories:
  - 10M<n<100M
dataset_info:
  features:
    - name: mix_id
      dtype: string
    - name: split
      dtype: string
    - name: sample_rate
      dtype: int32
    - name: target_duration
      dtype: float64
    - name: num_sources
      dtype: int32
    - name: sources
      sequence:
        - name: source_id
          dtype: string
        - name: path
          dtype: string
        - name: label
          dtype: string
        - name: crop_start_second
          dtype: float64
        - name: crop_end_second
          dtype: float64
        - name: chunk_start_second
          dtype: float64
        - name: chunk_end_second
          dtype: float64
        - name: rms_gain
          dtype: float64
        - name: snr_db
          dtype: float64
        - name: applied_weight
          dtype: float64
    - name: global_normalization_factor
      dtype: float64
    - name: final_max_amplitude
      dtype: float64
  splits:
    - name: train
      num_examples: 17500000
    - name: validation
      num_examples: 1750000
    - name: test
      num_examples: 350000
---

A Semantically Consistent Dataset for Data-Efficient Query-Based Universal Sound Separation

Kai Li*, Jintao Cheng*, Chang Zeng, Zijun Yan, Helin Wang, Zixiong Su, Bo Zheng, Xiaolin Hu
Tsinghua University, Shanda AI, Johns Hopkins University
*Equal contribution
Completed during Kai Li's internship at Shanda AI.
📜 arXiv 2026 | 🎶 Demo

Usage

from datasets import load_dataset

# Load full dataset
dataset = load_dataset("ShandaAI/Hive")

# Load specific split
train_data = load_dataset("ShandaAI/Hive", split="train")

# Streaming mode (recommended for large datasets)
dataset = load_dataset("ShandaAI/Hive", streaming=True)

📄 Dataset Description

Hive is a high-quality synthetic dataset designed for Universal Sound Separation (USS). Unlike traditional methods relying on weakly-labeled in-the-wild data, Hive leverages an automated data collection pipeline to mine high-purity single-event segments from complex acoustic environments and synthesizes mixtures with semantically consistent constraints.

Key Features

  • Purity over Scale: 2.4k hours of curated audio achieve performance competitive with baselines trained on million-hour corpora (~0.2% of their data scale)
  • Single-label Clean Supervision: Rigorous semantic-acoustic alignment eliminating co-occurrence noise
  • Semantically Consistent Mixing: Logic-based co-occurrence matrix ensuring realistic acoustic scenes
  • High Fidelity: 44.1kHz sample rate for high-quality audio

Dataset Scale

| Metric | Value |
| --- | --- |
| Training Set Raw Audio | 2,442 hours |
| Val & Test Set Raw Audio | 292 hours |
| Mixed Samples | 19.6M mixtures |
| Total Mixed Duration | ~22.4k hours |
| Label Categories | 283 classes |
| Sample Rate | 44.1 kHz |
| Training Sample Duration | 4 seconds |
| Test Sample Duration | 10 seconds |

Dataset Splits

| Split | Samples | Description |
| --- | --- | --- |
| Train | 17.5M | Training mixtures (4 s duration) |
| Validation | 1.75M | Validation mixtures |
| Test | 350k | Test mixtures (10 s duration) |

📂 Dataset Structure

Directory Organization

hive-datasets-parquet/
β”œβ”€β”€ README.md
β”œβ”€β”€ train/
β”‚   └── data.parquet
β”œβ”€β”€ validation/
β”‚   └── data.parquet
└── test/
    └── data.parquet

Each split contains a single Parquet file with all mixture metadata. The num_sources field indicates the number of sources (2-5) for each mixture.
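For quick inspection without the datasets library, a split's Parquet file can be read directly with pandas. The sketch below uses toy rows matching the schema; reading the real file would be `pd.read_parquet("train/data.parquet")`, where the local path is an assumption about your download layout.

```python
import pandas as pd

# Toy rows mirroring the mixture-metadata schema (stand-ins for data.parquet).
df = pd.DataFrame([
    {"mix_id": "sample_00000001", "split": "train", "num_sources": 2},
    {"mix_id": "sample_00000002", "split": "train", "num_sources": 4},
    {"mix_id": "sample_00000003", "split": "train", "num_sources": 2},
])

# Real usage (assumed local path after downloading the split):
# df = pd.read_parquet("train/data.parquet")

# Filter mixtures by their number of sources (2-5 in the real data).
two_source = df[df["num_sources"] == 2]
print(len(two_source))  # 2
```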


📋 Data Fields

JSON Schema

Each JSON object contains complete generation parameters for reproducing a mixture sample:

{
    "mix_id": "sample_00000003",
    "split": "train",
    "sample_rate": 44100,
    "target_duration": 4.0,
    "num_sources": 2,
    "sources": {
        "source_id": ["s1", "s2"],
        "path": ["relative/path/to/audio1", "relative/path/to/audio2"],
        "label": ["Ocean", "Rain"],
        "crop_start_second": [1.396, 2.5],
        "crop_end_second": [5.396, 6.5],
        "chunk_start_second": [35.0, 20.0],
        "chunk_end_second": [45.0, 30.0],
        "rms_gain": [3.546, 2.1],
        "snr_db": [0.0, -3.0],
        "applied_weight": [3.546, 1.487]
    },
    "global_normalization_factor": 0.786,
    "final_max_amplitude": 0.95
}

Field Descriptions

1. Basic Info Fields

| Field | Type | Description |
| --- | --- | --- |
| mix_id | string | Unique identifier for the mixture task |
| split | string | Dataset partition (train / validation / test) |
| sample_rate | int32 | Audio sample rate in Hz (44100) |
| target_duration | float64 | Target duration in seconds (4.0 for train, 10.0 for test) |
| num_sources | int32 | Number of audio sources in this mixture (2-5) |

2. Source Information (sources)

Metadata required to reproduce the mixing process for each audio source. Stored in columnar format (dict of lists) for efficient Parquet storage:

| Field | Type | Description |
| --- | --- | --- |
| source_id | list[string] | Source identifiers (s1, s2, ...) |
| path | list[string] | Relative paths to the source audio files |
| label | list[string] | AudioSet ontology labels for each source |
| chunk_start_second | list[float64] | Start times (seconds) for reading from original audio files |
| chunk_end_second | list[float64] | End times (seconds) for reading from original audio files |
| crop_start_second | list[float64] | Precise start positions (seconds) for reproducible random extraction |
| crop_end_second | list[float64] | Precise end positions (seconds) for reproducible random extraction |
| rms_gain | list[float64] | Energy normalization coefficients: $\text{target\_rms} / \text{current\_rms}$ |
| snr_db | list[float64] | Signal-to-noise ratios in dB assigned to each source |
| applied_weight | list[float64] | Final scaling weights: $\text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$ |
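When row-wise access is more convenient than the columnar layout, the dict of lists can be transposed into one record per source. A minimal sketch, using the values from the schema example above:

```python
# Columnar "sources" entry as stored in Parquet (values from the schema example).
sources = {
    "source_id": ["s1", "s2"],
    "label": ["Ocean", "Rain"],
    "snr_db": [0.0, -3.0],
    "applied_weight": [3.546, 1.487],
}

# Transpose the dict of lists into a list of per-source dicts.
per_source = [dict(zip(sources, vals)) for vals in zip(*sources.values())]
print(per_source[1]["label"])  # Rain
```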

3. Mixing Parameters

Global processing parameters after combining multiple audio sources:

| Field | Type | Description |
| --- | --- | --- |
| global_normalization_factor | float64 | Anti-clipping scaling coefficient: $0.95 / \text{max\_val}$ |
| final_max_amplitude | float64 | Maximum amplitude threshold (0.95) to prevent bit-depth overflow |

Detailed Field Explanations

Cropping Logic

  • chunk_start/end_second: Defines the reading interval from the original audio file
  • crop_start/end_second: Records the precise random cropping position, ensuring exact reproducibility across runs
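A minimal sketch of turning crop boundaries into frame indices at 44.1 kHz. Whether `crop_start_second` is measured relative to the loaded chunk or to the original file is an assumption here; the official pipeline is authoritative.

```python
SAMPLE_RATE = 44100

def crop_frames(crop_start_s, crop_end_s, sample_rate=SAMPLE_RATE):
    """Convert crop boundaries in seconds to integer frame indices."""
    start = int(round(crop_start_s * sample_rate))
    end = int(round(crop_end_s * sample_rate))
    return start, end

# Values from the schema example: a 4-second crop starting at 1.396 s.
start, end = crop_frames(1.396, 5.396)
print(start, end, (end - start) / SAMPLE_RATE)  # 61564 237964 4.0
```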

Energy Normalization (rms_gain)

Adjusts different audio sources to the same energy level: $\text{rms\_gain} = \frac{\text{target\_rms}}{\text{current\_rms}}$

Signal-to-Noise Ratio (snr_db)

The SNR value assigned to each source, sampled from a predefined range using random.uniform(snr_range[0], snr_range[1]).

Applied Weight

The comprehensive scaling weight combining energy normalization and SNR adjustment: $\text{applied\_weight} = \text{rms\_gain} \times 10^{(\text{snr\_db} / 20)}$

This is the final coefficient applied to the original waveform.
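The formula can be checked against the schema example above (s1: rms_gain 3.546 at 0 dB; s2: rms_gain 2.1 at -3 dB):

```python
def applied_weight(rms_gain, snr_db):
    """Final per-source scaling: energy normalization times SNR adjustment."""
    return rms_gain * 10 ** (snr_db / 20)

print(round(applied_weight(3.546, 0.0), 3))  # 3.546 (0 dB leaves rms_gain unchanged)
print(round(applied_weight(2.1, -3.0), 3))   # 1.487 (matches the schema example)
```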

Global Normalization Factor

Prevents audio clipping after mixing: $\text{global\_normalization\_factor} = \frac{0.95}{\text{max\_val}}$

Where max_val is the peak amplitude (absolute value) of the mixed signal.
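Putting the pieces together, a hedged sketch of the mixing arithmetic on synthetic waveforms (source loading, cropping, and RMS computation omitted; the weights are the applied_weight values from the schema example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic 4 s waveforms at 44.1 kHz standing in for the cropped sources.
sources = [rng.standard_normal(4 * 44100) for _ in range(2)]
weights = [3.546, 1.487]  # applied_weight per source (schema example values)

# Weighted sum of the sources.
mix = sum(w * s for w, s in zip(weights, sources))

# Global anti-clipping normalization toward a 0.95 peak.
max_val = np.abs(mix).max()
factor = 0.95 / max_val
mix *= factor

print(np.isclose(np.abs(mix).max(), 0.95))  # True
```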


🔧 Usage

Download Metadata

from datasets import load_dataset

# Load the metadata for a specific split
dataset = load_dataset("ShandaAI/Hive", split="train")

Generate Mixed Audio

Please refer to the official GitHub repository for the complete audio generation pipeline.

# Clone the repository
git clone https://github.com/ShandaAI/Hive.git
cd Hive/hive_dataset

# Generate mixtures from metadata
python mix_from_metadata/mix_from_metadata.py \
    --metadata_dir /path/to/downloaded/metadata \
    --output_dir ./hive_dataset \
    --dataset_paths dataset_paths.json \
    --num_processes 16

📚 Source Datasets

Hive integrates 12 public datasets to construct a long-tailed acoustic space:

| # | Dataset | Clips | Duration (h) | License |
| --- | --- | --- | --- | --- |
| 1 | BBC Sound Effects | 369,603 | 1,020.62 | Remix License |
| 2 | AudioSet | 326,890 | 896.61 | CC BY |
| 3 | VGGSound | 115,191 | 319.10 | CC BY 4.0 |
| 4 | MUSIC21 | 32,701 | 90.28 | YouTube Standard |
| 5 | FreeSound | 17,451 | 46.90 | CC0/BY/BY-NC |
| 6 | ClothoV2 | 14,759 | 38.19 | Non-Commercial Research |
| 7 | Voicebank-DEMAND | 12,376 | 9.94 | CC BY 4.0 |
| 8 | AVE | 3,054 | 6.91 | CC BY-NC-SA |
| 9 | SoundBible | 2,501 | 5.78 | CC BY 4.0 |
| 10 | DCASE | 1,969 | 5.46 | Academic Use |
| 11 | ESC50 | 1,433 | 1.99 | CC BY-NC 3.0 |
| 12 | FSD50K | 636 | 0.80 | Creative Commons |
| | Total | 898,564 | 2,442.60 | |

Important Note: This repository releases only metadata (JSON files containing mixing parameters and source references) for reproducibility. Users must independently download and prepare the source datasets according to their respective licenses.


📖 Citation

If you use this dataset, please cite:



βš–οΈ License

This dataset metadata is released under the Apache License 2.0.

Please note that the source audio files are subject to their original licenses. Users must comply with the respective licenses when using the source datasets.


πŸ™ Acknowledgments

We extend our gratitude to the researchers and organizations who curated the foundational datasets that made Hive possible:

  • BBC Sound Effects - Professional-grade recordings with broadcast-level fidelity
  • AudioSet (Google) - Large-scale audio benchmark
  • VGGSound (University of Oxford) - Real-world acoustic diversity
  • FreeSound (MTG-UPF) - Rich crowdsourced soundscapes
  • And all other contributing datasets

📬 Contact

For questions or issues, please open an issue on the GitHub repository or contact the authors.