danielrosehill (with Claude) committed
Commit 6f1b448 · 1 Parent(s): aedc670

Update dataset README with comprehensive documentation


Rewrote the README to reflect the current state of the dataset and provide
comprehensive documentation for users and researchers.

Changes:
- Updated dataset status (190+ annotated voice notes)
- Added detailed feature descriptions and use cases
- Documented the Voice Note Dataset Manager interface
- Included comprehensive annotation schema overview
- Added auto-generated statistics file information
- Improved data organization documentation
- Added citation format and contact information
- Enhanced metadata descriptions (audio, text, technical specs)
- Clarified multilingual and real-world audio aspects
- Added dataset growth and management information

The README now accurately reflects the dataset's capabilities, management
tools, and research applications.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Files changed (1):
  1. README.md (+147 −103)
README.md CHANGED
@@ -3,7 +3,7 @@ task_categories:
  - automatic-speech-recognition
  language:
  - en
- pretty_name: "Voice Note Audio"
  size_categories:
  - "n<1K"
  tags:
@@ -11,122 +11,166 @@ tags:
  - noise-robustness
  - evaluation
  - whisper
  license: mit
  ---

- # Voice Notes
-
- A dataset of voice notes collected by Daniel Rosehill in and around Jerusalem (mostly) in a variety of acoustic environments and in a variety of formats reflecting typical daily use with speech to text transcription apps.
-
- This dataset is a subsection of a voice note training dataset that I'm curating for STT fine-tuning and entity recognition.
-
- ## Annotation
-
- The dataset includes rich annotations collected using Label Studio:
-
- - Corrected transcripts (manually corrected AI transcripts)
- - Audio quality ratings
- - Environmental information (recording location, microphone type, etc.)
- - Content classification
- - Audio challenges present
- - Language information
- - Entity recognition
- - Audio source identification
-
- ## Label Studio Configuration Parameters
-
- ### Audio Challenges Present
- Multiple selection options for identifying audio quality issues:
- - Traffic Noise: Road traffic sounds
- - Audible Conversations: Other people talking
- - Outdoor Noise (General): Street/urban sounds
- - Background Music: Music playing
- - **Crying Baby**: Baby crying sounds (newly added)
-
- ### Incidental Audio Pickup Source
- Single selection for identifying the source of incidental audio:
- - **Speaker**: Audio from the primary speaker
- - **Others**: Audio from other sources
-
- ### Background Conversation Language
- Single selection for identifying the language of background conversations:
- - **English**
- - **Hebrew**
- - **Arabic**
- - **French**
- - **Russian**
-
- ### Multilingual Transcript
- Single selection to indicate if the transcript contains multiple languages:
- - **True**: Transcript contains multiple languages
- - **False**: Transcript is in a single language
-
- ### Entities Present in Note
- Multiple selection for identifying named entities mentioned in the voice note:
- - **Dates**: Specific dates or time references
- - **Persons**: Names of people
- - **Placenames**: Geographic locations or places
- - **Email Addresses**: Email addresses mentioned
- - **Blog Title**: Blog or article titles
- - **Acronym**: Acronyms or abbreviations
- - **Organisations**: Company or organization names
-
- ### Bluetooth Codec
- Single selection for identifying the Bluetooth codec used during recording:
- - **SBC**: Standard Bluetooth codec
- - **AAC**: Advanced Audio Coding
- - **aptX**: Qualcomm aptX codec
- - **aptX HD**: High-definition aptX codec
- - **LDAC**: Sony LDAC high-quality codec
- - **LC3**: Low Complexity Communication Codec
- - **N/A**: Not applicable (wired/internal mic)
- - **Unknown**: Codec information unavailable
-
- ## Microphones Used
-
- The voice notes in this dataset were recorded using various microphones:
- - **OnePlus Nord 3 Internal Microphone**: Built-in phone microphone
- - **Poly 5200**: Bluetooth-connected microphone
- - **ATR 4697**: Professional microphone

  ## Data Organization

- - `audio/` - Processed audio files (MP3/WAV)
- - `transcripts/` - Transcript files
-   - `uncorrected/` - AI-generated transcripts
-   - `ground_truths/` - Manually corrected transcripts (ground truth)
- - `annotations/` - Annotation task files and completed annotations
- - `candidate-parameters.md` - Additional parameters for future implementation
- - `preprocessing/` - Workflow for adding new data (see preprocessing/README.md)

- ## Purpose
-
- This collection, consisting of voice notes recorded by Daniel Rosehill using Voicenotes.com, is specifically gathered to evaluate and improve the robustness of Speech-to-Text (STT) systems under non-ideal, real-world conditions. Unlike studio-quality audio used for training, these notes often contain various types of background noise, overlapping conversations, and environmental distortions typical of everyday recording scenarios.
-
- This dataset serves three primary objectives:
-
- ### 1. Personal STT Fine-Tuning
- Improve speech recognition accuracy for personal voice notes by creating a refined transcription model tailored to individual speech patterns and common recording environments.
-
- ### 2. Voice Note Entity Recognition
- Develop a specialized model for the "Voice Router" application to classify and identify entities within voice note recordings, enabling intelligent routing and categorization of voice-based content.
-
- ### 3. Public Research Dataset
- Generate a comprehensive, open-source dataset with rich annotations for various audio recording conditions, enabling STT model evaluation across different acoustic environments and contributing to the broader speech recognition research community.
-
- The dataset contains approximately 700 voice notes totaling 13 hours of audio. Each audio file comes with an AI-generated transcript provided by Voicenotes.com's STT service, serving as a baseline for comparison. A subset of these transcripts will be manually corrected to create a high-quality ground truth dataset for fine-tuning STT models and developing a comprehensive, nuanced speech recognition research and development framework focused on real-world voice note transcription challenges.
-
- ## Contents
-
- - `audio/`: Folder containing the original MP3 audio files of the voice notes.
- - `transcripts/`: Folder containing transcript files
-   - `uncorrected/`: Raw, AI-generated transcripts corresponding to the audio files
-   - `ground_truths/`: Manually corrected transcripts for training and evaluation
- - `dataset_metadata.json`: Metadata associated with the dataset entries.
- - `label_studio_config.xml`: Configuration file for Label Studio, an annotation tool.
- - `setup_annotation.py`: Script to help set up the annotation process.
- - `parameters.md`: A detailed list of parameters to be annotated for each voice note.
-
- ## Annotation
-
- The `parameters.md` file specifies the key aspects to be annotated for each voice note, including audio quality, speaker characteristics, transcription accuracy, and contextual information. This structured annotation will provide valuable metadata for analyzing STT performance and guiding model improvements.
 
  - automatic-speech-recognition
  language:
  - en
+ pretty_name: "Voice Note Audio Dataset"
  size_categories:
  - "n<1K"
  tags:
  - noise-robustness
  - evaluation
  - whisper
+ - real-world-audio
+ - voice-notes
  license: mit
  ---

+ # Voice Note Audio Dataset
+
+ A curated dataset of real-world voice notes collected by Daniel Rosehill, primarily recorded in and around Jerusalem, Israel. This dataset captures authentic voice recordings in diverse acoustic environments and formats, reflecting typical daily usage patterns with speech-to-text transcription applications.
+
+ **Current Status:** 190+ annotated voice notes with comprehensive metadata
+
+ ## Dataset Overview
+
+ This dataset is part of a larger voice note training collection being curated for STT fine-tuning, entity recognition, and real-world speech recognition evaluation. Unlike studio-quality audio commonly used in speech recognition training, these recordings intentionally include the challenges present in everyday voice note usage:
+
+ - Variable background noise (traffic, conversations, music)
+ - Different recording environments (indoor, outdoor, vehicles)
+ - Multiple microphone types and Bluetooth codecs
+ - Natural speaking patterns and multilingual content
+ - Real-world audio quality variations
+
+ ## Key Features
+
+ ### Comprehensive Annotations
+
+ Each voice note includes rich metadata stored in JSON format:
+
+ - **Audio Metadata**: Duration, bitrate, sample rate, file format, codec information
+ - **Transcripts**: AI-generated (uncorrected) and manually corrected ground truth versions
+ - **Text Metrics**: Word count, character count, lexical diversity, WPM (words per minute)
+ - **Quality Ratings**: Audio quality assessments, noise type classification
+ - **Environmental Context**: Recording location, time of day, background conditions
+ - **Content Classification**: Note type (email draft, to-do, idea, meeting note, etc.)
+ - **Language Information**: Primary language, multilingual indicators, mixed-language notes
+ - **Technical Details**: Microphone type, Bluetooth codec, recording device
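A per-note annotation record along those lines might look like the sketch below. Every field name here is an illustrative guess based on the categories listed above, not the dataset's confirmed schema; `ANNOTATION_SCHEMA.md` in the repository documents the real one.

```python
# Hypothetical shape of one annotation record; field names are assumptions,
# not the published schema (see ANNOTATION_SCHEMA.md for the real spec).
example_annotation = {
    "id": 1,
    "audio": {"duration_seconds": 62.4, "sample_rate": 44100, "format": "mp3"},
    "transcripts": {"uncorrected": "raw STT text", "ground_truth": "corrected text"},
    "quality": {"rating": "Good", "noise_types": ["Traffic Noise"]},
    "content": {"note_type": "to-do", "location": "outdoor"},
    "language": {"primary": "en", "multilingual": False},
    "technical": {"microphone": "OnePlus Nord 3 Internal", "bluetooth_codec": "N/A"},
}
```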
+
+ ### Dataset Statistics
+
+ The repository includes an auto-generated `STATS.md` file with comprehensive metrics:
+
+ - Total audio duration and word count across all recordings
+ - Average duration and word count per note
+ - Dataset completeness percentages (transcripts, corrections, annotations)
+ - Character counts and text complexity metrics
+
+ Statistics are automatically updated when new recordings are added to the dataset.
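The aggregate numbers above (totals, per-note averages, completeness percentages) can be derived from the per-note metadata with a simple fold; a minimal sketch, assuming illustrative field names (`duration_seconds`, `word_count`, `has_ground_truth`) rather than the dataset's actual schema:

```python
# Sketch of STATS.md-style aggregation over per-note annotation dicts.
# Field names are assumptions for illustration, not the published schema.
from statistics import mean

def summarize(annotations: list[dict]) -> dict:
    """Aggregate per-note metadata into dataset-level statistics."""
    durations = [a["duration_seconds"] for a in annotations]
    words = [a["word_count"] for a in annotations]
    corrected = sum(1 for a in annotations if a.get("has_ground_truth"))
    return {
        "total_duration_s": sum(durations),
        "total_words": sum(words),
        "avg_duration_s": mean(durations),
        "avg_words": mean(words),
        "corrected_pct": 100 * corrected / len(annotations),
    }

notes = [
    {"duration_seconds": 62.0, "word_count": 150, "has_ground_truth": True},
    {"duration_seconds": 38.0, "word_count": 90, "has_ground_truth": False},
]
stats = summarize(notes)  # totals, averages, and % of notes with corrections
```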

  ## Data Organization

+ ```
+ Voice-Note-Audio/
+ ├── audio/                      # Audio files (MP3, WAV, M4A, OGG)
+ ├── transcripts/
+ │   ├── uncorrected/            # AI-generated transcripts from STT
+ │   └── ground_truths/          # Manually corrected transcripts
+ ├── annotations/                # JSON metadata for each recording
+ ├── STATS.md                    # Auto-generated dataset statistics
+ ├── ANNOTATION_SCHEMA.md        # Detailed schema documentation
+ └── annotation_schema_v1.json   # JSON schema specification
+ ```
+
+ Files are numbered sequentially (e.g., `1.mp3`, `1.txt`, `1.json`) for easy cross-referencing.
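The sequential naming means one sample can be assembled with a few path joins; a minimal sketch, assuming the directory layout shown above and plain `.txt`/`.json` files (actual audio extensions vary per file, so a real loader would probe for MP3/WAV/M4A/OGG):

```python
# Hypothetical loader for one cross-referenced sample by index; the layout
# mirrors the tree above but is an assumption, not an official API.
import json
from pathlib import Path

def load_sample(root: Path, index: int) -> dict:
    """Collect the audio path, transcripts, and annotation for one note."""
    sample = {"audio": root / "audio" / f"{index}.mp3"}
    uncorrected = root / "transcripts" / "uncorrected" / f"{index}.txt"
    ground_truth = root / "transcripts" / "ground_truths" / f"{index}.txt"
    annotation = root / "annotations" / f"{index}.json"
    if uncorrected.exists():
        sample["uncorrected"] = uncorrected.read_text(encoding="utf-8")
    if ground_truth.exists():
        sample["ground_truth"] = ground_truth.read_text(encoding="utf-8")
    if annotation.exists():
        sample["annotation"] = json.loads(annotation.read_text(encoding="utf-8"))
    return sample
```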
+
+ ## Dataset Management
+
+ This dataset is actively managed using a custom Hugging Face Space application: **Voice Note Dataset Manager**
+
+ The management interface provides:
+ - Quick upload functionality with batch processing
+ - Automated metadata extraction and calculation
+ - Real-time statistics tracking and visualization
+ - Browse, edit, and delete capabilities
+ - Comprehensive annotation support
+ - Automatic stats file generation
+
+ ## Use Cases
+
+ ### 1. STT Model Fine-Tuning
+ Train and evaluate speech recognition models on real-world voice notes with natural noise and speaking patterns, improving accuracy for everyday recording conditions.
+
+ ### 2. Noise Robustness Evaluation
+ Benchmark STT systems against various background noise types and acoustic challenges commonly encountered in voice note applications.
+
+ ### 3. Entity Recognition Development
+ Develop specialized NER (Named Entity Recognition) models for voice notes to identify dates, names, locations, organizations, and other entities in spoken content.
+
+ ### 4. Voice Note Classification
+ Train models to automatically categorize voice notes by type (to-do items, meeting notes, ideas, etc.) based on audio characteristics and content.
+
+ ### 5. Multilingual Speech Research
+ Study code-switching and multilingual speech patterns in authentic voice recordings containing mixed English, Hebrew, and other languages.
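For the fine-tuning and noise-robustness use cases, the paired uncorrected/ground-truth transcripts support word error rate (WER) evaluation. A self-contained sketch using word-level Levenshtein distance (libraries such as `jiwer` offer the same metric, but this avoids any dependency):

```python
# Word error rate between a ground-truth transcript and an STT hypothesis,
# computed as word-level edit distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Running this per note and grouping by the annotated noise types or codecs gives a per-condition robustness breakdown.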
105
+
106
+ ## Annotation Schema
107
+
108
+ The dataset uses a comprehensive annotation schema covering:
109
+
110
+ ### Audio Quality Indicators
111
+ - Quality ratings: Clear, Good, Fair, Poor, Very Poor
112
+ - Noise types: Traffic, conversations, music, wind, office noise, crying baby
113
+ - Audio challenges and environmental factors
114
+
115
+ ### Content & Context
116
+ - Note type classification (8+ categories)
117
+ - Recording location and environment
118
+ - Multi-language indicators
119
+ - Time-of-day metadata
120
+
121
+ ### Technical Specifications
122
+ - Microphone types (OnePlus Nord 3 Internal, Poly 5200, ATR 4697, etc.)
123
+ - Bluetooth codecs (SBC, AAC, aptX, aptX HD, LDAC, LC3)
124
+ - Audio format and encoding details
125
+
126
+ ### Text Analysis
127
+ - Word count and character count metrics
128
+ - Lexical diversity measurements
129
+ - Speaking rate (WPM) calculations
130
+ - Sentence structure analysis
131
+
132
+ See `ANNOTATION_SCHEMA.md` for complete schema documentation.
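The Text Analysis metrics above follow directly from a transcript and its audio duration; a minimal sketch, treating lexical diversity as the type-token ratio (unique words over total words), which is one common definition but may not be exactly how the dataset computes it:

```python
# Sketch of the text metrics listed above: word/character counts, lexical
# diversity as type-token ratio, and speaking rate in words per minute.
def text_metrics(transcript: str, duration_seconds: float) -> dict:
    words = transcript.lower().split()
    return {
        "word_count": len(words),
        "char_count": len(transcript),
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
        "wpm": len(words) / (duration_seconds / 60) if duration_seconds else 0.0,
    }
```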
+
+ ## Recording Equipment
+
+ Voice notes were captured using:
+ - **OnePlus Nord 3**: Internal microphone (primary device)
+ - **Poly 5200**: Bluetooth headset microphone
+ - **ATR 4697**: Professional wired microphone
+
+ Various Bluetooth codecs are documented in the metadata when applicable.
+
+ ## Dataset Growth
+
+ This is an actively growing dataset. New voice notes are continuously added with full annotations and metadata. Check `STATS.md` in the repository for current dataset size and metrics.
+
+ ## Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @dataset{rosehill_voicenote_2024,
+   author       = {Rosehill, Daniel},
+   title        = {Voice Note Audio Dataset},
+   year         = {2024},
+   publisher    = {Hugging Face},
+   howpublished = {\url{https://huggingface.co/datasets/danielrosehill/Voice-Note-Audio}}
+ }
+ ```
+
+ ## License
+
+ This dataset is released under the MIT License, allowing for both commercial and non-commercial use with attribution.
+
+ ## Contact
+
+ **Daniel Rosehill**
+ - Website: [danielrosehill.com](https://danielrosehill.com)
+ - Email: public@danielrosehill.com
+ - Hugging Face: [@danielrosehill](https://huggingface.co/danielrosehill)
+
+ ## Acknowledgments
+
+ AI transcripts provided by [Voicenotes.com](https://voicenotes.com), serving as baseline uncorrected transcripts for comparison with ground truth corrections.
+
+ Dataset management interface built using Gradio and Hugging Face Spaces.