---
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: "Voice Note Audio"
size_categories:
- "n<1K"
tags:
- speech-to-text
- noise-robustness
- evaluation
- whisper
license: mit
---

# Voice Notes

A dataset of voice notes recorded by Daniel Rosehill, mostly in and around Jerusalem, across a variety of acoustic environments and recording formats, reflecting typical daily use of speech-to-text transcription apps.

This dataset is a subset of a larger voice note training dataset that I'm curating for STT fine-tuning and entity recognition.

## Annotation

The dataset includes rich annotations collected using Label Studio:

- Corrected transcripts (manually corrected AI transcripts)
- Audio quality ratings
- Environmental information (recording location, microphone type, etc.)
- Content classification
- Audio challenges present
- Language information
- Entity recognition
- Audio source identification

## Label Studio Configuration Parameters

### Audio Challenges Present
Multiple selection options for identifying audio quality issues:
- **Traffic Noise**: Road traffic sounds
- **Audible Conversations**: Other people talking
- **Outdoor Noise (General)**: Street/urban sounds
- **Background Music**: Music playing
- **Crying Baby**: Baby crying sounds

### Incidental Audio Pickup Source
Single selection for identifying the source of incidental audio:
- **Speaker**: Audio from the primary speaker
- **Others**: Audio from other sources

### Background Conversation Language
Single selection for identifying the language of background conversations:
- **English**
- **Hebrew**
- **Arabic**
- **French**
- **Russian**

### Multilingual Transcript
Single selection to indicate if the transcript contains multiple languages:
- **True**: Transcript contains multiple languages
- **False**: Transcript is in a single language

### Entities Present in Note
Multiple selection for identifying named entities mentioned in the voice note:
- **Dates**: Specific dates or time references
- **Persons**: Names of people
- **Placenames**: Geographic locations or places
- **Email Addresses**: Email addresses mentioned
- **Blog Title**: Blog or article titles
- **Acronym**: Acronyms or abbreviations
- **Organisations**: Company or organization names

### Bluetooth Codec
Single selection for identifying the Bluetooth codec used during recording:
- **SBC**: Standard Bluetooth codec
- **AAC**: Advanced Audio Coding
- **aptX**: Qualcomm aptX codec
- **aptX HD**: High-definition aptX codec
- **LDAC**: Sony LDAC high-quality codec
- **LC3**: Low Complexity Communication Codec
- **N/A**: Not applicable (wired/internal mic)
- **Unknown**: Codec information unavailable

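Taken together, a completed annotation for one voice note might look like the record below. This is a hypothetical sketch: the field names, the filename, and the validation are illustrative only, not the exact Label Studio export schema.

```python
# Hypothetical annotation record for one voice note, combining the
# parameters described above. Field names and the filename are
# illustrative; the actual Label Studio export schema may differ.
annotation = {
    "audio_file": "audio/note_001.mp3",  # hypothetical filename
    "audio_challenges": ["Traffic Noise", "Audible Conversations"],
    "incidental_audio_source": "Others",
    "background_conversation_language": "Hebrew",
    "multilingual_transcript": False,
    "entities_present": ["Persons", "Placenames"],
    "bluetooth_codec": "N/A",
}

# Basic validation against the allowed codec values listed above.
ALLOWED_CODECS = {"SBC", "AAC", "aptX", "aptX HD", "LDAC", "LC3", "N/A", "Unknown"}
assert annotation["bluetooth_codec"] in ALLOWED_CODECS
```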
## Microphones Used

The voice notes in this dataset were recorded using various microphones:
- **OnePlus Nord 3 Internal Microphone**: Built-in phone microphone
- **Poly 5200**: Bluetooth-connected microphone
- **ATR 4697**: Professional microphone

## Data Organization

- `audio/` - Processed audio files (MP3/WAV)
- `transcripts/` - Transcript files
  - `uncorrected/` - AI-generated transcripts
  - `ground_truths/` - Manually corrected transcripts (ground truth)
- `annotations/` - Annotation task files and completed annotations
- `candidate-parameters.md` - Additional parameters for future implementation
- `preprocessing/` - Workflow for adding new data (see preprocessing/README.md)

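Given this layout, an audio file can be paired with its two transcripts by filename stem. The following is a minimal sketch that assumes transcripts are `.txt` files named after the audio file; the dataset's actual naming convention may differ.

```python
from pathlib import Path

def transcript_paths(audio_file: str) -> dict:
    """Map an audio file to its uncorrected and ground-truth transcripts.

    Assumes transcripts share the audio file's stem and use a .txt
    extension; adjust if the dataset uses a different convention.
    """
    stem = Path(audio_file).stem
    return {
        "uncorrected": Path("transcripts/uncorrected") / f"{stem}.txt",
        "ground_truth": Path("transcripts/ground_truths") / f"{stem}.txt",
    }

# Hypothetical filename, for illustration only.
paths = transcript_paths("audio/note_001.mp3")
print(paths["ground_truth"].as_posix())  # transcripts/ground_truths/note_001.txt
```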
## Purpose

This collection, consisting of voice notes recorded by Daniel Rosehill using Voicenotes.com, was gathered specifically to evaluate and improve the robustness of speech-to-text (STT) systems under non-ideal, real-world conditions. Unlike the studio-quality audio often used for training, these notes frequently contain background noise, overlapping conversations, and the environmental distortions typical of everyday recording scenarios.

This dataset serves three primary objectives:

### 1. Personal STT Fine-Tuning
Improve speech recognition accuracy for personal voice notes by creating a refined transcription model tailored to individual speech patterns and common recording environments.

### 2. Voice Note Entity Recognition
Develop a specialized model for the "Voice Router" application to classify and identify entities within voice note recordings, enabling intelligent routing and categorization of voice-based content.

### 3. Public Research Dataset
Generate a comprehensive, open-source dataset with rich annotations for various audio recording conditions, enabling STT model evaluation across different acoustic environments and contributing to the broader speech recognition research community.

The dataset contains approximately 700 voice notes totaling 13 hours of audio. Each audio file comes with an AI-generated transcript from Voicenotes.com's STT service, serving as a baseline for comparison. A subset of these transcripts will be manually corrected to create a high-quality ground truth for fine-tuning STT models and for research into real-world voice note transcription challenges.
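
The gap between the baseline AI transcripts and the corrected ground truths can be quantified with word error rate (WER). Below is a minimal stdlib-only sketch of the metric; libraries such as jiwer provide the same computation.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted word over a four-word reference: WER = 0.25
print(word_error_rate("record a voice note", "record a voice node"))  # 0.25
```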

## Contents

- `audio/`: Folder containing the original MP3 audio files of the voice notes
- `transcripts/`: Folder containing transcript files
  - `uncorrected/`: Raw, AI-generated transcripts corresponding to the audio files
  - `ground_truths/`: Manually corrected transcripts for training and evaluation
- `dataset_metadata.json`: Metadata associated with the dataset entries
- `label_studio_config.xml`: Configuration file for Label Studio, an annotation tool
- `setup_annotation.py`: Script to help set up the annotation process
- `parameters.md`: A detailed list of parameters to be annotated for each voice note

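A Label Studio import file can be generated from the audio listing with a short script. The sketch below is hypothetical and may differ from what `setup_annotation.py` actually does; it uses Label Studio's common task shape of one object per item with the media path under a `data` key.

```python
import json

# Hypothetical sketch of building a Label Studio import file from a
# list of audio files. The real setup_annotation.py may work
# differently; the filenames here are illustrative.
audio_files = ["audio/note_001.mp3", "audio/note_002.mp3"]

tasks = [{"data": {"audio": path}} for path in audio_files]

payload = json.dumps(tasks, indent=2)
print(payload)
```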
## Annotation Parameters

The `parameters.md` file specifies the key aspects to be annotated for each voice note, including audio quality, speaker characteristics, transcription accuracy, and contextual information. This structured annotation provides valuable metadata for analyzing STT performance and guiding model improvements.