# Noisy Voice Notes
A small, hand-curated set of real-world voice notes recorded over roughly a year by a single speaker (me — Daniel Rosehill), captured in everyday environments rather than a studio. Most clips contain meaningful background noise: traffic, café chatter, kitchens, public transport, wind, kids, etc. They are deliberately not clean.
The dataset is intended as a small evaluation / probing set, not a training corpus. Each clip ships with the original transcript plus a layer of personal annotations.
## What it is useful for
Three use cases drove the curation:
- Background-noise / denoising evaluation — clips with realistic, varied noise floors and known DNSMOS-style quality scores, to probe how denoisers and noise-suppression models behave on consumer-grade voice notes.
- ASR evaluation on imperfect material — speech-to-text systems are typically benchmarked on relatively clean speech. These clips let you measure degradation on the kind of audio people actually capture on phones, with the original transcript already in place as a comparison point (see the WER sketch after this list).
- Voice-note classification — the speaker's own annotation of what each note is (to-do list, note-to-self, diary entry, email draft, blog idea, podcast prompt, outline, etc.) is the basis for a personal classification project. The same labels can be used by anyone exploring few-shot or fine-tuned classifiers over short spoken-intent audio.
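For the ASR use case, a minimal sketch of scoring a new hypothesis against the shipped transcript. It assumes a local clone of the repo and the jiwer package; `my_asr_transcribe` and `<id>` are hypothetical placeholders for your own ASR system and a real clip id. Keep in mind the reference is itself ASR output (voicenotes.com / ElevenLabs Scribe), so the number compares against a strong commercial system rather than ground truth.

```python
# Minimal WER sketch for the ASR use case. Assumes a local clone of the repo
# and the `jiwer` package. `my_asr_transcribe` and "<id>" are placeholders for
# your own ASR system and a real clip id from metadata.csv.
from pathlib import Path

import jiwer

root = Path("Noisy-Voice-Notes")

def my_asr_transcribe(audio_path: Path) -> str:
    """Hypothetical hook: run your ASR model on one MP3 and return its text."""
    raise NotImplementedError

def normalise(text: str) -> str:
    # Light normalisation so casing and stray punctuation don't dominate the score.
    return " ".join(text.lower().replace(",", " ").replace(".", " ").split())

reference = normalise((root / "transcripts" / "<id>.md").read_text(encoding="utf-8"))
hypothesis = normalise(my_asr_transcribe(root / "audio" / "<id>.mp3"))

print(f"WER vs. the shipped transcript: {jiwer.wer(reference, hypothesis):.2%}")
```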
## Source
All recordings come from my own personal archive on voicenotes.com, a voice-note app I've been using and recommend. Transcripts in this dataset are the ones generated by voicenotes.com's pipeline (ElevenLabs Scribe under the hood, at the time of capture) — they have not been hand-corrected. They reflect what a strong commercial ASR system produced on this audio, which is itself part of what makes the dataset useful for ASR comparisons.
Audio is released as MP3 in its original form (no denoising, no normalisation, no trimming).
## Layout
audio/ # <id>.mp3
transcripts/ # <id>.md (original voicenotes.com transcript)
metadata.csv # one row per clip
Each row in metadata.csv joins to audio/<id>.mp3 and transcripts/<id>.md via audio_relpath / transcript_relpath.
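A minimal sketch of walking the corpus that way, assuming a local clone of the dataset repo (adjust `root` to your checkout) and pandas:

```python
# Join metadata.csv to the audio and transcript files via the relpath columns.
# Assumes a local clone of the dataset repo; adjust `root` to your checkout.
from pathlib import Path

import pandas as pd

root = Path("Noisy-Voice-Notes")
meta = pd.read_csv(root / "metadata.csv")

for _, row in meta.iterrows():
    audio_path = root / row["audio_relpath"]          # audio/<id>.mp3
    transcript = (root / row["transcript_relpath"]).read_text(encoding="utf-8")
    print(row["id"], row["duration_s"], audio_path.name, len(transcript.split()))
```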
## Schema (selected columns)
The full column-by-column reference lives in DATA_DICTIONARY.md. The summary below is a quick orientation.
| Column | Meaning |
|---|---|
| id, uuid | stable identifier |
| title, speaker, recorded_at, duration_s | basic metadata (speaker is always Daniel Rosehill) |
| BAK, SIG, OVRL | DNSMOS P.835 scores (1–5; lower BAK = noisier background) |
| noise_level | bucketed BAK |
| audio_quality_rating | 1–5 star human rating |
| audio_defects | list (clipping, wind, distortion, etc.) |
| languages | spoken languages present |
| hebrew_usage, background_languages | code-switch / ambient-language flags |
| non_intended_audio | TV, music, other speakers, etc. |
| note_types_multi, note_categories_multi | speaker-assigned type/category labels |
| subject_matter | short topic descriptor |
| mwp_prompt | boolean — speaker's "Morning Writing Prompt" tag |
| transcription_quality | human judgement of the source transcript |
| microphone, capture_location | capture context (location supports freehand entries) |
| acoustic features | rms_dbfs, peak_dbfs, crest_factor_db, clipping_ratio, silence_ratio, speech_ratio, snr_db_estimate, spectral & ZCR features, HNR proxy |
| transcript stats | transcript_chars, transcript_words, wpm, active_wpm |
DNSMOS is reference-free and was run locally with the Microsoft ONNX models; it is a proxy for perceived quality, not ground truth.
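As an illustration of the background-noise use case, a sketch of slicing the corpus by the DNSMOS columns; the 3.0 cut-off is an arbitrary example, not a recommended threshold:

```python
# Slice the corpus by background-noise level using the DNSMOS BAK score.
# The 3.0 cut-off is an arbitrary illustration, not a recommended threshold.
import pandas as pd

meta = pd.read_csv("Noisy-Voice-Notes/metadata.csv")
noisy = meta[meta["BAK"] < 3.0].sort_values("BAK")

print(f"{len(noisy)} clips with a noticeably noisy background (low BAK)")
print(noisy[["id", "BAK", "SIG", "OVRL", "noise_level", "audio_defects"]].head())
```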
## Annotation status
Annotation is in two stages, and the dataset card will be updated as the schema stabilises:
- Release validation (current). Each candidate clip is reviewed against my voicenotes.com archive to confirm it is OK to release publicly. Clips that touch on private people, sensitive topics, or PII flagged by an automated screen are withheld. The corpus on disk is therefore deliberately a subset of what I've recorded.
- Background-noise triggers (next). A second annotation pass will tag the type of background noise present (traffic, café, kitchen, wind, HVAC, music, other speakers, etc.) so the dataset can be sliced by acoustic environment. The schema for those tags is still firming up.
PII screening combines Microsoft Presidio + spaCy NER with keyword heuristics for medical / financial / relationship / credential mentions. It is conservative: anything it flags, false positives included, is dropped from the public set rather than risk releasing something I shouldn't.
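The exact screening pipeline isn't published with the dataset; the sketch below only illustrates the kind of Presidio + keyword pass described above, with a hypothetical keyword list and score threshold.

```python
# Rough sketch of the kind of PII screen described above: Presidio (spaCy-backed
# NER) plus a keyword pass. The keyword list and threshold are illustrative
# placeholders, not the configuration actually used for this dataset.
from presidio_analyzer import AnalyzerEngine

SENSITIVE_KEYWORDS = {"diagnosis", "salary", "password", "bank"}  # hypothetical

analyzer = AnalyzerEngine()  # default NLP engine is spaCy

def should_withhold(transcript: str, score_threshold: float = 0.5) -> bool:
    """Return True if a clip should be kept out of the public set."""
    hits = analyzer.analyze(text=transcript, language="en")
    if any(hit.score >= score_threshold for hit in hits):
        return True
    return bool(set(transcript.lower().split()) & SENSITIVE_KEYWORDS)
```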
## What this is not
- Not a clean-speech corpus.
- Not multi-speaker — it's one speaker, one accent, one set of recording habits.
- Not a hand-corrected transcript benchmark — the transcripts are the ASR system's output, kept as-is.
- Not large. It is a probing / evaluation set; treat it accordingly.
## Citation / attribution
If you use this dataset, please cite or link back to the Hugging Face page (danielrosehill/Noisy-Voice-Notes). Released under CC-BY-4.0.
## Contact
Issues, corrections, or "please remove this clip" requests: open an issue on the Hugging Face dataset page or contact daniel@danielrosehill.co.il.