---
task_categories:
  - automatic-speech-recognition
language:
  - en
pretty_name: Voice Note Audio Dataset
size_categories:
  - n<1K
tags:
  - speech-to-text
  - noise-robustness
  - evaluation
  - whisper
  - real-world-audio
  - voice-notes
license: mit
---

# Voice Note Audio Dataset

A curated dataset of real-world voice notes collected by Daniel Rosehill, primarily recorded in and around Jerusalem, Israel. This dataset captures authentic voice recordings in diverse acoustic environments and formats, reflecting typical daily usage patterns with speech-to-text transcription applications.

**Current Status:** 190+ annotated voice notes with comprehensive metadata

## Dataset Overview

This dataset is part of a larger voice note training collection being curated for STT fine-tuning, entity recognition, and real-world speech recognition evaluation. Unlike studio-quality audio commonly used in speech recognition training, these recordings intentionally include the challenges present in everyday voice note usage:

- Variable background noise (traffic, conversations, music)
- Different recording environments (indoor, outdoor, vehicles)
- Multiple microphone types and Bluetooth codecs
- Natural speaking patterns and multilingual content
- Real-world audio quality variations

## Key Features

### Comprehensive Annotations

Each voice note includes rich metadata stored in JSON format:

- **Audio Metadata:** Duration, bitrate, sample rate, file format, codec information
- **Transcripts:** AI-generated (uncorrected) and manually corrected ground truth versions
- **Text Metrics:** Word count, character count, lexical diversity, WPM (words per minute)
- **Quality Ratings:** Audio quality assessments, noise type classification
- **Environmental Context:** Recording location, time of day, background conditions
- **Content Classification:** Note type (email draft, to-do, idea, meeting note, etc.)
- **Language Information:** Primary language, multilingual indicators, mixed-language notes
- **Technical Details:** Microphone type, Bluetooth codec, recording device
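As an illustration, an annotation file can be loaded with the standard library. The field names in the sketch below are assumptions based on the categories above, not the exact schema; the authoritative definition lives in `schema/annotation_schema_v1.json`:

```python
import json

# Hypothetical annotation mirroring the metadata categories above;
# actual field names are defined by the versioned schema.
sample = {
    "schema_version": "1.0.0",
    "audio": {"duration_seconds": 42.5, "sample_rate": 44100, "format": "mp3"},
    "transcripts": {
        "uncorrected": "transcripts/uncorrected/1.txt",
        "ground_truth": "transcripts/ground_truths/1.txt",
    },
    "text_metrics": {"word_count": 120, "wpm": 169.4},
    "quality": {"audio_quality": "good", "noise_types": ["traffic"]},
    "language": {"primary": "en", "multilingual": False},
}

# Round-trip through JSON, as an annotations/ file would be read from disk.
annotation = json.loads(json.dumps(sample))
print(annotation["schema_version"])             # "1.0.0"
print(annotation["audio"]["duration_seconds"])  # 42.5
```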

### Dataset Statistics

The repository includes an auto-generated `STATS.md` file with comprehensive metrics:

- Total audio duration and word count across all recordings
- Average duration and word count per note
- Dataset completeness percentages (transcripts, corrections, annotations)
- Character counts and text complexity metrics

Statistics are automatically updated when new recordings are added to the dataset.

## Data Organization

```text
Voice-Note-Audio/
├── audio/                          # Audio files (MP3, WAV, M4A, OGG)
├── transcripts/
│   ├── uncorrected/               # AI-generated transcripts from STT
│   └── ground_truths/             # Manually corrected transcripts
├── annotations/                    # JSON metadata for each recording
├── schema/                         # Annotation schema (versioned)
│   ├── annotation_schema_v1.json  # Schema definition v1.0.0
│   ├── README.md                  # Schema documentation
│   └── CHANGELOG.md               # Version history
├── STATS.md                       # Auto-generated dataset statistics
└── README.md                      # This file
```

Files are numbered sequentially (e.g., `1.mp3`, `1.txt`, `1.json`) for easy cross-referencing.
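Under that convention, the files for a given note can be paired by their numeric stem. A minimal sketch, assuming the directory layout shown in the tree above:

```python
from pathlib import Path

def collect_note(root: Path, note_id: int) -> dict:
    """Gather the audio, transcript, and annotation paths for one note.

    Assumes the sequential naming described above (1.mp3, 1.txt, 1.json);
    the audio file may use any of the listed extensions.
    """
    audio = next(
        (p for ext in ("mp3", "wav", "m4a", "ogg")
         if (p := root / "audio" / f"{note_id}.{ext}").exists()),
        None,
    )
    return {
        "audio": audio,
        "uncorrected": root / "transcripts" / "uncorrected" / f"{note_id}.txt",
        "ground_truth": root / "transcripts" / "ground_truths" / f"{note_id}.txt",
        "annotation": root / "annotations" / f"{note_id}.json",
    }
```

For example, `collect_note(Path("Voice-Note-Audio"), 1)` would return the paths for note 1.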

## Dataset Management

This dataset is actively managed using a custom Hugging Face Space application: Voice Note Dataset Manager

The management interface provides:

- Quick upload functionality with batch processing
- Automated metadata extraction and calculation
- Real-time statistics tracking and visualization
- Browse, edit, and delete capabilities
- Comprehensive annotation support
- Automatic stats file generation

## Use Cases

### 1. STT Model Fine-Tuning

Train and evaluate speech recognition models on real-world voice notes with natural noise and speaking patterns, improving accuracy for everyday recording conditions.
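For evaluation against the ground-truth transcripts, a word error rate (WER) can be computed for each note. A minimal sketch (not the dataset's official tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance via dynamic programming over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

print(wer("send the email to daniel", "send email to daniel"))  # 0.2
```

Scoring the uncorrected transcripts against the ground truths this way gives a per-note baseline before any fine-tuning.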

### 2. Noise Robustness Evaluation

Benchmark STT systems against various background noise types and acoustic challenges commonly encountered in voice note applications.

### 3. Entity Recognition Development

Develop specialized NER (Named Entity Recognition) models for voice notes to identify dates, names, locations, organizations, and other entities in spoken content.

### 4. Voice Note Classification

Train models to automatically categorize voice notes by type (to-do items, meeting notes, ideas, etc.) based on audio characteristics and content.

### 5. Multilingual Speech Research

Study code-switching and multilingual speech patterns in authentic voice recordings containing mixed English, Hebrew, and other languages.

## Annotation Schema

The dataset uses a comprehensive, versioned annotation schema to ensure consistency and enable schema evolution over time.

**Current Schema Version:** 1.0.0 (Released: 2025-10-26)

### Schema Versioning

The annotation schema follows Semantic Versioning (MAJOR.MINOR.PATCH):

- **MAJOR:** Incompatible schema changes
- **MINOR:** Backward-compatible additions
- **PATCH:** Backward-compatible bug fixes

Each annotation automatically includes a `schema_version` field, enabling:

- Tracking which schema version was used for each annotation
- Backward compatibility as the schema evolves
- Migration paths when schema updates occur
- Historical analysis of annotation practices
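A consumer can gate on that field before parsing an annotation. A sketch of a semver-style compatibility check, following the rules above (only a MAJOR bump is a breaking change):

```python
def parse_version(v: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into integers."""
    major, minor, patch = (int(part) for part in v.split("."))
    return major, minor, patch

def is_compatible(annotation_version: str, supported_major: int = 1) -> bool:
    """Any annotation sharing the supported MAJOR version is readable,
    since MINOR and PATCH changes are backward-compatible."""
    return parse_version(annotation_version)[0] == supported_major

print(is_compatible("1.2.3"))  # True
print(is_compatible("2.0.0"))  # False
```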

Schema files and documentation are maintained in the `schema/` directory:

- `annotation_schema_v1.json` - Current schema definition
- `README.md` - Schema usage and documentation
- `CHANGELOG.md` - Version history and changes

### Schema Coverage

#### Classification (31 categories)

Comprehensive note type classification including:

- **Communication:** Email drafts, replies
- **Task Management:** To-do lists, reminders, shopping lists
- **Content Creation:** Blog posts, articles, social media, scripts, presentations
- **Development:** Prompts (general, development, creative), documentation, code comments, bug reports, feature requests
- **Personal & Professional:** Journal entries, memos, ideas, meeting notes, research notes, project planning
- **General:** Questions, other

#### Audio Defects (10 categories)

Real-world audio challenges for STT evaluation:

- Background noise, music, conversations
- Crying baby, traffic sounds
- Poor quality (distortion, clipping)
- Multiple speakers, wind noise, echo
- Phone ringing/notifications

#### Content Issues (5 categories)

Recording-level problems:

- Side conversations, partial content
- False starts, thinking aloud
- Self-correction during recording

#### Languages (7 supported)

Multi-language support for:

- English, Hebrew, Arabic
- Russian, French, Spanish, German

#### Transcription Quality (5 levels)

STT output assessment:

- Excellent, Good, Fair, Poor, Unusable

#### Additional Metadata

- **Audio Quality Indicators:** Quality ratings, noise types, environmental factors
- **Technical Specifications:** Microphone types, Bluetooth codecs, audio formats
- **Text Analysis:** Word/character counts, lexical diversity, speaking rate (WPM)
- **Context:** Recording location, time of day, multi-language indicators
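The text-analysis metrics are straightforward to reproduce from a transcript and the audio duration. A sketch, assuming lexical diversity is defined as the type-token ratio (the schema documentation has the authoritative definitions):

```python
def text_metrics(transcript: str, duration_seconds: float) -> dict:
    """Word count, character count, lexical diversity (type-token ratio),
    and speaking rate in words per minute."""
    words = transcript.split()
    return {
        "word_count": len(words),
        "char_count": len(transcript),
        "lexical_diversity": len({w.lower() for w in words}) / max(len(words), 1),
        "wpm": len(words) / (duration_seconds / 60) if duration_seconds else 0.0,
    }

print(text_metrics("remember to buy milk and buy bread", 3.5))
```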

See `schema/README.md` and `schema/CHANGELOG.md` for complete schema documentation and version history.

## Recording Equipment

Voice notes were captured using:

- **OnePlus Nord 3:** Internal microphone (primary device)
- **Poly 5200:** Bluetooth headset microphone
- **ATR 4697:** Professional wired microphone

The Bluetooth codec in use is documented in the metadata when applicable.

## Dataset Growth

This is an actively growing dataset. New voice notes are continuously added with full annotations and metadata. Check `STATS.md` in the repository for current dataset size and metrics.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{rosehill_voicenote_2024,
  author = {Rosehill, Daniel},
  title = {Voice Note Audio Dataset},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/danielrosehill/Voice-Note-Audio}}
}
```

## License

This dataset is released under the MIT License, allowing for both commercial and non-commercial use with attribution.

## Contact

Daniel Rosehill

## Acknowledgments

AI-generated transcripts were provided by Voicenotes.com and serve as the baseline uncorrected transcripts for comparison against the ground truth corrections.

Dataset management interface built using Gradio and Hugging Face Spaces.