Update dataset README with comprehensive documentation
Rewrote the README to reflect the current state of the dataset and provide
comprehensive documentation for users and researchers.
Changes:
- Updated dataset status (190+ annotated voice notes)
- Added detailed feature descriptions and use cases
- Documented the Voice Note Dataset Manager interface
- Included comprehensive annotation schema overview
- Added auto-generated statistics file information
- Improved data organization documentation
- Added citation format and contact information
- Enhanced metadata descriptions (audio, text, technical specs)
- Clarified multilingual and real-world audio aspects
- Added dataset growth and management information
The README now accurately reflects the dataset's capabilities, management
tools, and research applications.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>

---
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: "Voice Note Audio Dataset"
size_categories:
- "n<1K"
tags:
- noise-robustness
- evaluation
- whisper
- real-world-audio
- voice-notes
license: mit
---

# Voice Note Audio Dataset

A curated dataset of real-world voice notes collected by Daniel Rosehill, primarily recorded in and around Jerusalem, Israel. This dataset captures authentic voice recordings in diverse acoustic environments and formats, reflecting typical daily usage patterns with speech-to-text transcription applications.

**Current Status:** 190+ annotated voice notes with comprehensive metadata

## Dataset Overview

This dataset is part of a larger voice note training collection being curated for STT fine-tuning, entity recognition, and real-world speech recognition evaluation. Unlike the studio-quality audio commonly used in speech recognition training, these recordings intentionally include the challenges present in everyday voice note usage:

- Variable background noise (traffic, conversations, music)
- Different recording environments (indoor, outdoor, vehicles)
- Multiple microphone types and Bluetooth codecs
- Natural speaking patterns and multilingual content
- Real-world audio quality variations

## Key Features

### Comprehensive Annotations

Each voice note includes rich metadata stored in JSON format:

- **Audio Metadata**: Duration, bitrate, sample rate, file format, codec information
- **Transcripts**: AI-generated (uncorrected) and manually corrected ground truth versions
- **Text Metrics**: Word count, character count, lexical diversity, WPM (words per minute)
- **Quality Ratings**: Audio quality assessments, noise type classification
- **Environmental Context**: Recording location, time of day, background conditions
- **Content Classification**: Note type (email draft, to-do, idea, meeting note, etc.)
- **Language Information**: Primary language, multilingual indicators, mixed-language notes
- **Technical Details**: Microphone type, Bluetooth codec, recording device
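
The text metrics listed above can be derived from a transcript plus its audio duration. A minimal sketch, with illustrative field names rather than the dataset's exact schema:

```python
# Sketch: deriving the text metrics listed above from a transcript.
# The returned field names are illustrative, not the dataset's actual schema.

def text_metrics(transcript: str, duration_seconds: float) -> dict:
    words = transcript.split()
    word_count = len(words)
    char_count = len(transcript)
    # Lexical diversity as a type-token ratio: unique words / total words.
    unique = {w.lower().strip(".,!?") for w in words}
    lexical_diversity = len(unique) / word_count if word_count else 0.0
    # Speaking rate in words per minute.
    wpm = word_count / (duration_seconds / 60) if duration_seconds else 0.0
    return {
        "word_count": word_count,
        "character_count": char_count,
        "lexical_diversity": round(lexical_diversity, 3),
        "wpm": round(wpm, 1),
    }

metrics = text_metrics("Remember to email the draft to the team tomorrow", 6.0)
```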

### Dataset Statistics

The repository includes an auto-generated `STATS.md` file with comprehensive metrics:

- Total audio duration and word count across all recordings
- Average duration and word count per note
- Dataset completeness percentages (transcripts, corrections, annotations)
- Character counts and text complexity metrics

Statistics are automatically updated when new recordings are added to the dataset.
## Data Organization

```
Voice-Note-Audio/
├── audio/                      # Audio files (MP3, WAV, M4A, OGG)
├── transcripts/
│   ├── uncorrected/            # AI-generated transcripts from STT
│   └── ground_truths/          # Manually corrected transcripts
├── annotations/                # JSON metadata for each recording
├── STATS.md                    # Auto-generated dataset statistics
├── ANNOTATION_SCHEMA.md        # Detailed schema documentation
└── annotation_schema_v1.json   # JSON schema specification
```

Files are numbered sequentially (e.g., `1.mp3`, `1.txt`, `1.json`) for easy cross-referencing.
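
Because of the sequential numbering, the files belonging to one note can be resolved from its ID alone. A minimal sketch based on the layout above (it assumes `.mp3` audio, though other formats occur):

```python
from pathlib import Path

# Sketch: resolving the files that belong to one voice note, following the
# sequential numbering convention. Audio extension assumed to be .mp3 here;
# the dataset also contains WAV, M4A, and OGG files.

def record_paths(note_id: int, root: str = "Voice-Note-Audio") -> dict[str, Path]:
    base = Path(root)
    return {
        "audio": base / "audio" / f"{note_id}.mp3",
        "uncorrected": base / "transcripts" / "uncorrected" / f"{note_id}.txt",
        "ground_truth": base / "transcripts" / "ground_truths" / f"{note_id}.txt",
        "annotation": base / "annotations" / f"{note_id}.json",
    }

paths = record_paths(1)
```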
## Dataset Management

This dataset is actively managed using a custom Hugging Face Space application: **Voice Note Dataset Manager**.

The management interface provides:

- Quick upload functionality with batch processing
- Automated metadata extraction and calculation
- Real-time statistics tracking and visualization
- Browse, edit, and delete capabilities
- Comprehensive annotation support
- Automatic stats file generation

## Use Cases

### 1. STT Model Fine-Tuning

Train and evaluate speech recognition models on real-world voice notes with natural noise and speaking patterns, improving accuracy for everyday recording conditions.

### 2. Noise Robustness Evaluation

Benchmark STT systems against various background noise types and acoustic challenges commonly encountered in voice note applications.
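
Benchmarking here typically means scoring a system's transcript against the ground-truth correction, for example with word error rate (WER). A minimal self-contained sketch (not tooling shipped with this dataset):

```python
# Sketch: word error rate (WER) between a hypothesis transcript and a
# ground-truth reference, via Levenshtein distance over whitespace tokens.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref) if ref else 0.0

score = wer("send the draft tomorrow", "send a draft tomorrow")
```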

### 3. Entity Recognition Development

Develop specialized NER (Named Entity Recognition) models for voice notes to identify dates, names, locations, organizations, and other entities in spoken content.
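
As a starting point, a couple of the annotated entity types (email addresses, explicit numeric dates) can be caught with simple rules before reaching for a learned model. A hypothetical rule-based baseline:

```python
import re

# Sketch: a trivial rule-based baseline for two entity types (emails and
# explicit numeric dates). A learned NER model is needed for persons,
# placenames, organisations, and the other annotated entity types.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def extract_entities(text: str) -> dict[str, list[str]]:
    return {"emails": EMAIL.findall(text), "dates": DATE.findall(text)}

ents = extract_entities("Email sam@example.com about the 12/05/2024 deadline")
```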

### 4. Voice Note Classification

Train models to automatically categorize voice notes by type (to-do items, meeting notes, ideas, etc.) based on audio characteristics and content.

### 5. Multilingual Speech Research

Study code-switching and multilingual speech patterns in authentic voice recordings containing mixed English, Hebrew, and other languages.

## Annotation Schema

The dataset uses a comprehensive annotation schema covering:

### Audio Quality Indicators

- Quality ratings: Clear, Good, Fair, Poor, Very Poor
- Noise types: Traffic, conversations, music, wind, office noise, crying baby
- Audio challenges and environmental factors

### Content & Context

- Note type classification (8+ categories)
- Recording location and environment
- Multi-language indicators
- Time-of-day metadata

### Technical Specifications

- Microphone types (OnePlus Nord 3 Internal, Poly 5200, ATR 4697, etc.)
- Bluetooth codecs (SBC, AAC, aptX, aptX HD, LDAC, LC3)
- Audio format and encoding details

### Text Analysis

- Word count and character count metrics
- Lexical diversity measurements
- Speaking rate (WPM) calculations
- Sentence structure analysis

See `ANNOTATION_SCHEMA.md` for complete schema documentation.

## Recording Equipment

Voice notes were captured using:

- **OnePlus Nord 3**: Internal microphone (primary device)
- **Poly 5200**: Bluetooth headset microphone
- **ATR 4697**: Professional wired microphone

The Bluetooth codec in use is documented in each note's metadata where applicable.

## Dataset Growth

This is an actively growing dataset. New voice notes are continuously added with full annotations and metadata. Check `STATS.md` in the repository for current dataset size and metrics.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{rosehill_voicenote_2024,
  author       = {Rosehill, Daniel},
  title        = {Voice Note Audio Dataset},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/danielrosehill/Voice-Note-Audio}}
}
```

## License

This dataset is released under the MIT License, allowing for both commercial and non-commercial use with attribution.

## Contact

**Daniel Rosehill**

- Website: [danielrosehill.com](https://danielrosehill.com)
- Email: public@danielrosehill.com
- Hugging Face: [@danielrosehill](https://huggingface.co/danielrosehill)

## Acknowledgments

AI transcripts provided by [Voicenotes.com](https://voicenotes.com), serving as baseline uncorrected transcripts for comparison with ground truth corrections.

Dataset management interface built using Gradio and Hugging Face Spaces.