---
license: cc-by-4.0
task_categories:
- audio-to-audio
language:
- en
pretty_name: FUSS
configs:
- config_name: unprocessed
  data_files:
  - split: train
    path: parquet/fuss_unprocessed/train/*.parquet
  - split: validation
    path: parquet/fuss_unprocessed/validation/*.parquet
  - split: test
    path: parquet/fuss_unprocessed/test/*.parquet
- config_name: reverberant
  default: true
  data_files:
  - split: train
    path: parquet/fuss_reverberant/train/*.parquet
  - split: validation
    path: parquet/fuss_reverberant/validation/*.parquet
  - split: test
    path: parquet/fuss_reverberant/test/*.parquet
---
# FUSS Parquet Dataset
This dataset provides the Free Universal Sound Separation (FUSS) Dataset as a set of parquet files.
The Free Universal Sound Separation (FUSS) Dataset is a database of arbitrary sound mixtures and source-level references, for use in experiments on arbitrary sound separation.
This is the official sound separation data for the DCASE2020 Challenge Task 4: Sound Event Detection and Separation in Domestic Environments.
## Overview

FUSS audio data is sourced from a pre-release of the Freesound Dataset known as FSD50K, a sound event dataset composed of Freesound content annotated with labels from the AudioSet Ontology. Using the FSD50K labels, these source files have been screened such that they likely contain only a single type of sound. Labels are not provided for these source files and are not considered part of the challenge. For the purposes of the DCASE Task 4 Sound Separation and Event Detection challenge, systems should not use FSD50K labels, even though they may become available upon FSD50K release.
To create mixtures, 10 second clips of sources are convolved with simulated room impulse responses and added together. Each 10 second mixture contains between 1 and 4 sources. Source files longer than 10 seconds are considered "background" sources. Every mixture contains one background source, which is active for the entire duration. We provide: a software recipe to create the dataset, the room impulse responses, and the original source audio.
- **Reverberant** (the default config) contains audio where the individual source sounds have been convolved with simulated room impulse responses (RIRs) before being mixed together.
- **Unprocessed** contains the raw audio mixed together without that room-simulation step: just the dry source signals combined directly.
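The mixing recipe described above (one always-active background source plus up to three shorter foreground events, summed into a 10-second clip) can be sketched in a few lines of NumPy. This is a toy illustration only: white noise stands in for the Freesound clips the real pipeline uses, and the 16 kHz rate is a nominal choice for the sketch.

```python
import numpy as np

# Toy sketch of the "unprocessed" mixing: one 10 s background source
# active for the whole clip, plus 0-3 shorter foreground events summed
# on top. Real FUSS mixes Freesound clips; noise here is a stand-in.
rng = np.random.default_rng(0)
sr = 16000                                   # nominal sample rate
n = 10 * sr                                  # 10-second clip

background = 0.1 * rng.standard_normal(n)    # always active
mixture = background.copy()
for _ in range(int(rng.integers(0, 4))):     # up to 3 foreground events
    length = int(rng.integers(sr, 3 * sr))   # each 1-3 s long
    start = int(rng.integers(0, n - length))
    mixture[start:start + length] += 0.3 * rng.standard_normal(length)

print(mixture.shape)
```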
## What reverberation means in audio
When you make a sound in a real room, you don't just hear the direct sound — you also hear it bouncing off the walls, ceiling, floor, and furniture. Those reflections arrive at your ears slightly delayed and attenuated, creating what we perceive as the "room sound" or reverb. A large cathedral has long, dramatic reverb; a small padded closet has almost none.
A room impulse response captures this behavior mathematically — it's essentially a recording of how a specific room transforms a short impulse (like a clap). When you convolve a dry audio signal with an RIR, you're mathematically simulating what that sound would sound like if it were actually played in that room.
So the reverberant version of FUSS is designed to be more realistic — it mimics what a microphone would actually pick up in a real indoor environment, where multiple sound sources are all bouncing off surfaces. The unprocessed version gives you the "clean" version without that spatial complexity, which can be useful as a baseline or for different experimental setups.
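The RIR convolution step can be sketched with plain NumPy. The impulse response below is synthetic (a direct path plus a few delayed, attenuated reflections), purely for illustration; the actual FUSS RIRs come from a room simulator:

```python
import numpy as np

# Synthetic RIR: a direct impulse followed by a few delayed, attenuated
# "reflections". Real FUSS RIRs are produced by a room simulator.
sr = 16000
rir = np.zeros(sr // 2)                      # 0.5 s impulse response
rir[0] = 1.0                                 # direct path
for delay_ms, gain in [(23, 0.5), (41, 0.3), (70, 0.15)]:
    rir[sr * delay_ms // 1000] = gain        # early reflections

t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 440.0 * t)          # 1 s dry 440 Hz tone
wet = np.convolve(dry, rir)                  # "played in the room"

# Convolution lengthens the signal by the RIR tail:
print(len(dry), len(wet))
```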
See also the TensorFlow Datasets catalog entry: https://www.tensorflow.org/datasets/catalog/fuss
## Variants and splits
- unprocessed: train / validation / test
- reverberant: train / validation / test
## Observed file inventory
- fuss_unprocessed/train: 313 parquet files
- fuss_unprocessed/validation: 16 parquet files
- fuss_unprocessed/test: 16 parquet files
- fuss_reverberant/train: 313 parquet files
- fuss_reverberant/validation: 16 parquet files
- fuss_reverberant/test: 16 parquet files
Total: 690 parquet files, approximately 34.06 GB.
## Example usage
```python
from datasets import load_dataset

ds = load_dataset("scaleinvariant/fuss-parquet", "reverberant", split="train", streaming=True)
print(next(iter(ds)).keys())
```
## License
The Free Universal Sound Separation (FUSS) Data as a whole is released under the Attribution 4.0 International (CC BY 4.0) license.
You can find a human-readable summary of the license at:
https://creativecommons.org/licenses/by/4.0/
For convenience, the human-readable summary is included below:
You are free to:
- Share — copy and redistribute the material in any medium or format.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
However, the human-readable summary included above is not a substitute for the license, which is described in detail at: