Commit 6158f1c (0 parents): Initial commit
Changed files:

- .claude/settings.local.json +9 -0
- .gitattributes +59 -0
- .gitignore +6 -0
- README.md +132 -0
- annotations/task_list.json +15 -0
- audio/1.mp3 +3 -0
- audio/10.mp3 +3 -0
- audio/11.mp3 +3 -0
- audio/12.mp3 +3 -0
- audio/13.mp3 +3 -0
- audio/2.mp3 +3 -0
- audio/3.mp3 +3 -0
- audio/4.mp3 +3 -0
- audio/5.mp3 +3 -0
- audio/6.mp3 +3 -0
- audio/7.mp3 +3 -0
- audio/8.mp3 +3 -0
- audio/9.mp3 +3 -0
- calculated-parameters.md +136 -0
- candidate-parameters.md +138 -0
- dataset_metadata.json +7 -0
- label_studio_config.xml +181 -0
- parameters.md +98 -0
- preprocessing/README.md +22 -0
- preprocessing/move_to_dataset.py +71 -0
- setup_annotation.py +130 -0
- transcripts/ground_truths/1.txt +1 -0
- transcripts/uncorrected/1.txt +1 -0
- transcripts/uncorrected/10.txt +7 -0
- transcripts/uncorrected/11.txt +9 -0
- transcripts/uncorrected/12.txt +5 -0
- transcripts/uncorrected/13.txt +5 -0
- transcripts/uncorrected/2.txt +5 -0
- transcripts/uncorrected/3.txt +3 -0
- transcripts/uncorrected/4.txt +5 -0
- transcripts/uncorrected/5.txt +7 -0
- transcripts/uncorrected/6.txt +5 -0
- transcripts/uncorrected/7.txt +9 -0
- transcripts/uncorrected/8.txt +9 -0
- transcripts/uncorrected/9.txt +3 -0
.claude/settings.local.json
ADDED
@@ -0,0 +1,9 @@
{
  "permissions": {
    "allow": [
      "Bash(rm:*)"
    ],
    "deny": [],
    "ask": []
  }
}
.gitattributes
ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,6 @@

# Preprocessing
preprocessing/raw_audio/*
preprocessing/transcripts/*
!preprocessing/README.md
!preprocessing/move_to_dataset.py
README.md
ADDED
@@ -0,0 +1,132 @@
---
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: "Voice Note Audio"
size_categories:
- "n<1K"
tags:
- speech-to-text
- noise-robustness
- evaluation
- whisper
license: mit
---

# Voice Notes

A dataset of voice notes collected by Daniel Rosehill, mostly in and around Jerusalem, across a variety of acoustic environments and recording formats, reflecting typical daily use of speech-to-text transcription apps.

This dataset is a subsection of a voice note training dataset that I'm curating for STT fine-tuning and entity recognition.

## Annotation

The dataset includes rich annotations collected using Label Studio:

- Corrected transcripts (manually corrected AI transcripts)
- Audio quality ratings
- Environmental information (recording location, microphone type, etc.)
- Content classification
- Audio challenges present
- Language information
- Entity recognition
- Audio source identification

## Label Studio Configuration Parameters

### Audio Challenges Present
Multiple selection options for identifying audio quality issues:
- Traffic Noise: Road traffic sounds
- Audible Conversations: Other people talking
- Outdoor Noise (General): Street/urban sounds
- Background Music: Music playing
- **Crying Baby**: Baby crying sounds (newly added)

### Incidental Audio Pickup Source
Single selection for identifying the source of incidental audio:
- **Speaker**: Audio from the primary speaker
- **Others**: Audio from other sources

### Background Conversation Language
Single selection for identifying the language of background conversations:
- **English**
- **Hebrew**
- **Arabic**
- **French**
- **Russian**

### Multilingual Transcript
Single selection to indicate whether the transcript contains multiple languages:
- **True**: Transcript contains multiple languages
- **False**: Transcript is in a single language

### Entities Present in Note
Multiple selection for identifying named entities mentioned in the voice note:
- **Dates**: Specific dates or time references
- **Persons**: Names of people
- **Placenames**: Geographic locations or places
- **Email Addresses**: Email addresses mentioned
- **Blog Title**: Blog or article titles
- **Acronym**: Acronyms or abbreviations
- **Organisations**: Company or organization names

### Bluetooth Codec
Single selection for identifying the Bluetooth codec used during recording:
- **SBC**: Standard Bluetooth codec
- **AAC**: Advanced Audio Coding
- **aptX**: Qualcomm aptX codec
- **aptX HD**: High-definition aptX codec
- **LDAC**: Sony LDAC high-quality codec
- **LC3**: Low Complexity Communication Codec
- **N/A**: Not applicable (wired/internal mic)
- **Unknown**: Codec information unavailable

## Microphones Used

The voice notes in this dataset were recorded using various microphones:
- **OnePlus Nord 3 Internal Microphone**: Built-in phone microphone
- **Poly 5200**: Bluetooth-connected microphone
- **ATR 4697**: Professional microphone

## Data Organization

- `audio/` - Processed audio files (MP3/WAV)
- `transcripts/` - Transcript files
  - `uncorrected/` - AI-generated transcripts
  - `ground_truths/` - Manually corrected transcripts (ground truth)
- `annotations/` - Annotation task files and completed annotations
- `candidate-parameters.md` - Additional parameters for future implementation
- `preprocessing/` - Workflow for adding new data (see preprocessing/README.md)

## Purpose

This collection, consisting of voice notes recorded by Daniel Rosehill using Voicenotes.com, was gathered specifically to evaluate and improve the robustness of speech-to-text (STT) systems under non-ideal, real-world conditions. Unlike the studio-quality audio typically used for training, these notes often contain background noise, overlapping conversations, and the environmental distortions typical of everyday recording scenarios.

This dataset serves three primary objectives:

### 1. Personal STT Fine-Tuning
Improve speech recognition accuracy for personal voice notes by creating a refined transcription model tailored to individual speech patterns and common recording environments.

### 2. Voice Note Entity Recognition
Develop a specialized model for the "Voice Router" application to classify and identify entities within voice note recordings, enabling intelligent routing and categorization of voice-based content.

### 3. Public Research Dataset
Generate a comprehensive, open-source dataset with rich annotations for various audio recording conditions, enabling STT model evaluation across different acoustic environments and contributing to the broader speech recognition research community.

The full collection contains approximately 700 voice notes totaling 13 hours of audio. Each audio file comes with an AI-generated transcript produced by Voicenotes.com's STT service, serving as a baseline for comparison. A subset of these transcripts will be manually corrected to create a high-quality ground truth set for fine-tuning STT models and for building a nuanced speech recognition research framework focused on real-world voice note transcription challenges.

## Contents

- `audio/`: Folder containing the original MP3 audio files of the voice notes.
- `transcripts/`: Folder containing transcript files
  - `uncorrected/`: Raw, AI-generated transcripts corresponding to the audio files
  - `ground_truths/`: Manually corrected transcripts for training and evaluation
- `dataset_metadata.json`: Metadata associated with the dataset entries.
- `label_studio_config.xml`: Configuration file for Label Studio, an annotation tool.
- `setup_annotation.py`: Script to help set up the annotation process.
- `parameters.md`: A detailed list of parameters to be annotated for each voice note.

A sketch of how this layout can be traversed programmatically appears at the end of this README.

## Annotation Parameters

The `parameters.md` file specifies the key aspects to be annotated for each voice note, including audio quality, speaker characteristics, transcription accuracy, and contextual information. This structured annotation will provide valuable metadata for analyzing STT performance and guiding model improvements.
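To make the directory layout above concrete, here is a minimal sketch (Python standard library only; paths follow the structure listed under Contents) of pairing each audio file with its transcripts:

```python
import json
from pathlib import Path

# Pair each audio file with its uncorrected AI transcript and, where one
# exists, its manually corrected ground truth.
for audio_file in sorted(Path("audio").glob("*.mp3")):
    note_id = audio_file.stem
    uncorrected = Path("transcripts/uncorrected") / f"{note_id}.txt"
    ground_truth = Path("transcripts/ground_truths") / f"{note_id}.txt"
    record = {
        "id": note_id,
        "audio": str(audio_file),
        "ai_transcript": uncorrected.read_text().strip() if uncorrected.exists() else "",
        "ground_truth": ground_truth.read_text().strip() if ground_truth.exists() else None,
    }
    print(json.dumps(record)[:100])
```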
annotations/task_list.json
ADDED
@@ -0,0 +1,15 @@
[
  {
    "id": "1",
    "audio_path": "audio/1.mp3",
    "ai_transcript": "This is going to be version 2 refactor of the Hugging Face voice note training data set because I've learned tonight how to do data annotation properly using Label Studio so it's going to be a start from scratch in the interest of making it much easier to maintain this data set and upload additional notes as I go along.",
    "corrected_transcript": "",
    "parameters": {
      "speaker_info": "",
      "audio_quality": "",
      "environment": "",
      "corrections_needed": []
    },
    "status": "pending"
  }
]
audio/1.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a10240b7d269b288944413ec681fa62191fa6e2288bfe5053a75d84c2ebe04f
size 571436
audio/10.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bdb3a03d6767499673b0d6d33e20dc79b823380b1cf231b180458e8f18b26f67
size 2995244
audio/11.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82f2b46d25f630a0acfaa49ebcd68aecdcd7ba2269b3e5cdc4519e64a8fce23f
size 1094444
audio/12.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54336aa4d6505cc37a5ce9df870a45199f37594f471162d7539ab0ddb8333297
size 2789036
audio/13.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:593b888e1ad629e828e88363fe486799e3be855338bcf27700b3422a560eed29
size 2927276
audio/2.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a692404713ff9ab3d9e7e240fecd07b8b77bde851c0f8486d7326d3f0de15c22
size 4023236
audio/3.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40de252f68758768c600f85d1ee7987807b875ad0e082813e59ba48f80cc4b56
size 1051244
audio/4.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f62adf5c77228a2d63be6f805138d373ee84881b3f639337a752ba180781569b
size 1809836
audio/5.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a07b6bd8bc3c547d2c67f026be23eb7692db494e5eb886a528328a9db85967e
size 1507436
audio/6.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ce100bf89d3e36ef5b62b8723470a3712b8ff03486d8332c5fd4c83f458829fa
size 780524
audio/7.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:696564fb58f57df910fd633d008762ee0d1222d32f9dcea299c26ccce3084229
size 2534883
audio/8.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8dad9ac7ad7c2029df39d64a96d07698b5a1de0b66ce8eb9cd3a98ca3ea41700
size 1042604
audio/9.mp3
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fbfa672e9a3954dee5630183e33056b46f620c01d68e1c8563fcfa3200873235
size 878444
calculated-parameters.md
ADDED
@@ -0,0 +1,136 @@
# Calculated Parameters

This document outlines parameters that will be calculated programmatically from audio files and metadata. These automated calculations will enhance the dataset's research value without requiring manual annotation.

## Temporal Parameters

### Recording Timestamp
- **Description**: Exact timestamp when the voice note was recorded
- **Format**: ISO 8601 (YYYY-MM-DDTHH:MM:SSZ)
- **Source**: Audio file metadata or filename parsing
- **Research Value**: Enables temporal analysis of voice note patterns and quality variations over time
- **Implementation**: Extract from file creation date, embedded metadata (e.g., ID3 tags), or filename timestamps

### Time of Day
- **Description**: Categorical time period based on recording timestamp
- **Options**:
  - **Morning**: 06:00 - 11:59
  - **Afternoon**: 12:00 - 17:59
  - **Evening**: 18:00 - 21:59
  - **Night**: 22:00 - 05:59
- **Source**: Derived from recording timestamp
- **Research Value**: Correlate with voice quality, background noise patterns, and speech characteristics that may vary throughout the day
- **Implementation**: Programmatic categorization based on the hour component of the timestamp (see the sketch below)
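A minimal sketch of the time-of-day bucketing described above; the boundaries follow the option table and the function name is illustrative:

```python
from datetime import datetime

def time_of_day(ts: datetime) -> str:
    """Map a recording timestamp to the categorical period defined above."""
    hour = ts.hour
    if 6 <= hour < 12:
        return "Morning"
    if 12 <= hour < 18:
        return "Afternoon"
    if 18 <= hour < 22:
        return "Evening"
    return "Night"  # 22:00 - 05:59 wraps around midnight

print(time_of_day(datetime.fromisoformat("2024-03-03T19:45:00")))  # Evening
```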
## Audio Signal Analysis

### Signal-to-Noise Ratio (SNR)
- **Description**: Computed dB ratio between speech signal and background noise
- **Measurement**: Automatic calculation from audio analysis using voice activity detection
- **Research Value**: Critical for understanding STT performance degradation in noisy environments
- **Implementation**: FFT analysis with VAD segmentation

### Audio Duration
- **Description**: Precise length of the audio file
- **Format**: Seconds (floating point)
- **Source**: Audio file header analysis
- **Research Value**: Correlate recording length with transcription accuracy and user behavior patterns

### Average dB Level
- **Description**: Root Mean Square (RMS) amplitude level of the audio
- **Measurement**: dB relative to full scale (dBFS)
- **Research Value**: Understand recording volume consistency and microphone gain settings

### Peak dB Level
- **Description**: Maximum amplitude peak in the audio
- **Measurement**: Peak dB level relative to full scale
- **Research Value**: Identify clipping and dynamic range issues

### Dynamic Range
- **Description**: Difference between loudest and quietest audio segments
- **Measurement**: dB difference between peak and RMS levels
- **Research Value**: Indicates recording quality and compression effects
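A minimal sketch of the level metrics above, assuming librosa is available. SNR is omitted because it additionally requires a VAD step; the dBFS arithmetic follows the definitions in this section:

```python
import numpy as np
import librosa

def level_metrics(path: str) -> dict:
    """Compute duration, average (RMS) dBFS, peak dBFS, and dynamic range."""
    y, sr = librosa.load(path, sr=None, mono=True)  # keep native sample rate
    rms = np.sqrt(np.mean(np.square(y)))
    peak = np.max(np.abs(y))
    rms_dbfs = 20 * np.log10(max(rms, 1e-10))    # guard against log(0) on silence
    peak_dbfs = 20 * np.log10(max(peak, 1e-10))
    return {
        "duration_s": len(y) / sr,
        "avg_dbfs": rms_dbfs,
        "peak_dbfs": peak_dbfs,
        "dynamic_range_db": peak_dbfs - rms_dbfs,
    }

print(level_metrics("audio/1.mp3"))
```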
+
## Linguistic Analysis
|
| 55 |
+
|
| 56 |
+
### Word Count
|
| 57 |
+
- **Description**: Total number of words in the transcript
|
| 58 |
+
- **Source**: Tokenized transcript analysis
|
| 59 |
+
- **Research Value**: Correlate transcript length with accuracy and recording duration
|
| 60 |
+
|
| 61 |
+
### Character Count
|
| 62 |
+
- **Description**: Total number of characters in the transcript (excluding spaces)
|
| 63 |
+
- **Source**: String length analysis of transcript
|
| 64 |
+
- **Research Value**: Alternative length metric for analysis
|
| 65 |
+
|
| 66 |
+
### Estimated Speaking Rate
|
| 67 |
+
- **Description**: Words per minute calculation
|
| 68 |
+
- **Calculation**: (Word Count / Audio Duration in minutes)
|
| 69 |
+
- **Research Value**: Identify fast/slow speech patterns that may affect STT accuracy
|
| 70 |
+
|
| 71 |
+
### Speaking Rate Classification
|
| 72 |
+
- **Description**: Categorical classification of speaking speed
|
| 73 |
+
- **Options**:
|
| 74 |
+
- **Very Slow**: < 100 WPM
|
| 75 |
+
- **Slow**: 100-130 WPM
|
| 76 |
+
- **Normal**: 130-160 WPM
|
| 77 |
+
- **Fast**: 160-190 WPM
|
| 78 |
+
- **Very Fast**: > 190 WPM
|
| 79 |
+
- **Source**: Derived from estimated speaking rate
|
| 80 |
+
- **Research Value**: Group recordings by speech tempo for targeted analysis
|
| 81 |
+
|
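A sketch of the speaking-rate estimate and classification above, combining the transcript word count with the audio duration (thresholds follow the table; the duration value in the example is illustrative):

```python
def speaking_rate(transcript: str, duration_s: float) -> tuple[float, str]:
    """Return (WPM, category) for a transcript of a given audio duration."""
    wpm = len(transcript.split()) / (duration_s / 60.0)
    if wpm < 100:
        label = "Very Slow"
    elif wpm < 130:
        label = "Slow"
    elif wpm < 160:
        label = "Normal"
    elif wpm < 190:
        label = "Fast"
    else:
        label = "Very Fast"
    return wpm, label

text = open("transcripts/uncorrected/1.txt").read()
print(speaking_rate(text, duration_s=35.0))  # duration value is illustrative
```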
| 82 |
+
## File Metadata
|
| 83 |
+
|
| 84 |
+
### File Size
|
| 85 |
+
- **Description**: Size of the audio file in bytes
|
| 86 |
+
- **Source**: File system metadata
|
| 87 |
+
- **Research Value**: Understand compression ratios and storage requirements
|
| 88 |
+
|
| 89 |
+
### Audio Format
|
| 90 |
+
- **Description**: Audio codec and container format
|
| 91 |
+
- **Examples**: MP3, WAV, M4A, OGG
|
| 92 |
+
- **Source**: File header analysis
|
| 93 |
+
- **Research Value**: Account for compression artifacts in STT performance
|
| 94 |
+
|
| 95 |
+
### Sample Rate
|
| 96 |
+
- **Description**: Audio sampling frequency
|
| 97 |
+
- **Units**: Hz (e.g., 44100, 48000, 16000)
|
| 98 |
+
- **Source**: Audio file header
|
| 99 |
+
- **Research Value**: Correlate sampling rate with transcription quality
|
| 100 |
+
|
| 101 |
+
### Bit Depth
|
| 102 |
+
- **Description**: Audio bit depth/resolution
|
| 103 |
+
- **Units**: bits (e.g., 16, 24, 32)
|
| 104 |
+
- **Source**: Audio file header
|
| 105 |
+
- **Research Value**: Understand dynamic range capabilities
|
| 106 |
+
|
| 107 |
+
## Quality Metrics
|
| 108 |
+
|
| 109 |
+
### Word Error Rate (WER)
|
| 110 |
+
- **Description**: Calculated WER comparing AI transcript to corrected transcript
|
| 111 |
+
- **Calculation**: (Substitutions + Deletions + Insertions) / Total Reference Words
|
| 112 |
+
- **Source**: Comparison between original and corrected transcripts
|
| 113 |
+
- **Research Value**: Primary STT performance metric
|
| 114 |
+
|
| 115 |
+
### Character Error Rate (CER)
|
| 116 |
+
- **Description**: Character-level error rate
|
| 117 |
+
- **Calculation**: Similar to WER but at character level
|
| 118 |
+
- **Source**: Character-level comparison of transcripts
|
| 119 |
+
- **Research Value**: More granular accuracy assessment
|
| 120 |
+
|
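A minimal sketch of the WER/CER computation described above, assuming the jiwer package; the corrected ground truth serves as the reference and the raw AI transcript as the hypothesis:

```python
import jiwer

reference = open("transcripts/ground_truths/1.txt").read().strip()
hypothesis = open("transcripts/uncorrected/1.txt").read().strip()

# jiwer implements the (S + D + I) / N edit-distance formula described above.
print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```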
## Implementation Notes

- All calculations will be performed automatically during dataset preprocessing
- Timestamps will be normalized to UTC for consistency
- Error rate calculations require both original and corrected transcripts
- Audio analysis parameters require signal processing libraries (e.g., librosa)
- Results will be stored in dataset metadata JSON files
- Failed calculations will be marked as null/missing values with error logging

## Future Enhancements

- **Spectral Analysis**: Frequency domain characteristics
- **Voice Activity Detection**: Percentage of audio containing speech
- **Pause Analysis**: Distribution of silent segments
- **Prosodic Features**: Pitch, tempo, and rhythm analysis
- **Language Detection**: Automatic language identification confidence scores
candidate-parameters.md
ADDED
@@ -0,0 +1,138 @@
# Candidate Parameters for Future Implementation

This document outlines additional parameters that could enhance the dataset's research value for STT evaluation and fine-tuning. These parameters are candidates for future annotation phases.

## Audio Signal Quality

### Signal-to-Noise Ratio (SNR)
- **Description**: Computed dB ratio between speech signal and background noise
- **Measurement**: Automatic calculation from audio analysis
- **Research Value**: Critical for understanding STT performance degradation

### Audio Clipping Detection
- **Description**: Whether audio peaks are clipped or distorted
- **Options**: [None, Minimal (<1% samples), Moderate (1-5%), Severe (>5%)]
- **Research Value**: Identifies recordings with digital distortion artifacts

### Dynamic Range
- **Description**: Difference between loudest and quietest audio segments
- **Measurement**: dB difference between peak and RMS levels
- **Research Value**: Indicates recording quality and compression effects

### Frequency Response Issues
- **Description**: Low-pass filtering effects from phone microphones
- **Options**: [Full Range, Slight Roll-off, Moderate Filtering, Heavy Filtering]
- **Research Value**: Understanding microphone limitations on STT accuracy

## Speech Characteristics

### Accent/Dialect Strength
- **Description**: How pronounced regional speech patterns are
- **Options**: [None/Standard, Slight, Moderate, Strong, Very Strong]
- **Research Value**: Evaluating STT robustness across dialects

### Emotional State
- **Description**: Speaker's emotional state affecting speech patterns
- **Options**: [Calm, Excited, Frustrated, Tired, Stressed, Other]
- **Research Value**: Understanding how emotion affects STT accuracy

### Speech Hesitations
- **Description**: Frequency of disfluencies and self-corrections
- **Options**: [None, Rare (<5%), Occasional (5-15%), Frequent (15-30%), Very Frequent (>30%)]
- **Research Value**: Testing STT handling of natural speech patterns

### Articulation Quality
- **Description**: Clarity of speech production
- **Options**: [Very Clear, Clear, Slightly Unclear, Unclear, Mumbled]
- **Research Value**: Correlating articulation with transcription accuracy

## Linguistic Complexity

### Proper Noun Density
- **Description**: Percentage of words that are names, places, or brands
- **Calculation**: (Proper nouns / Total words) × 100 (see the sketch below)
- **Research Value**: Evaluating STT performance on out-of-vocabulary terms
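A sketch of the proper-noun-density calculation above, assuming spaCy with an English model installed (e.g., `python -m spacy download en_core_web_sm`); counting PROPN-tagged tokens is one reasonable approximation:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def proper_noun_density(text: str) -> float:
    """Percentage of alphabetic tokens tagged as proper nouns (PROPN)."""
    doc = nlp(text)
    words = [tok for tok in doc if tok.is_alpha]
    if not words:
        return 0.0
    propn = sum(1 for tok in words if tok.pos_ == "PROPN")
    return 100.0 * propn / len(words)

print(proper_noun_density("Daniel recorded this note near Jerusalem on Tuesday."))
```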
### Domain-Specific Vocabulary
- **Description**: Presence of technical, specialized, or foreign terminology
- **Options**: [None, Low (<5%), Moderate (5-15%), High (15-30%), Very High (>30%)]
- **Research Value**: Testing STT adaptation to specialized domains

### Sentence Structure Complexity
- **Description**: Grammatical complexity of spoken sentences
- **Options**: [Simple, Compound, Complex, Very Complex, Fragmented]
- **Research Value**: Understanding parsing challenges for STT systems

### Out-of-Vocabulary (OOV) Rate
- **Description**: Estimated percentage of words not in common STT vocabularies
- **Calculation**: Based on comparison with standard word lists
- **Research Value**: Predicting STT performance on novel content

## Recording Context

### Device Movement
- **Description**: Movement pattern during recording
- **Options**: [Stationary, Slight Movement, Walking, In Vehicle, Other Motion]
- **Research Value**: Understanding motion effects on audio quality

### Distance from Microphone
- **Description**: Estimated speaker distance from recording device
- **Options**: [Close (<6"), Normal (6-18"), Far (18-36"), Very Far (>36")]
- **Research Value**: Evaluating near-field vs. far-field performance

### Recording App/Service
- **Description**: Application used for recording (affects preprocessing)
- **Options**: [Voicenotes.com, Native Voice Memo, WhatsApp, Zoom, Other]
- **Research Value**: Understanding preprocessing effects on STT

### Time of Day
- **Description**: When the recording was made
- **Options**: [Early Morning, Morning, Afternoon, Evening, Late Night]
- **Research Value**: Correlating with voice fatigue and background patterns

## STT Challenge Categories

### Homophones Present
- **Description**: Words that sound alike but have different meanings
- **Detection**: Manual annotation or automated detection
- **Research Value**: Testing semantic disambiguation in STT

### Code-Switching
- **Description**: Mixing languages within the same utterance
- **Options**: [None, Occasional Words, Phrases, Frequent Switching]
- **Research Value**: Multilingual STT robustness evaluation

### Incomplete Sentences
- **Description**: Frequency of trailing off or interrupted thoughts
- **Options**: [None, Rare, Occasional, Frequent, Mostly Incomplete]
- **Research Value**: Natural speech pattern handling

### Number/Date Format Complexity
- **Description**: Complexity of numeric and temporal expressions
- **Examples**: "March 3rd" vs "3/3/24", "twenty-five" vs "25"
- **Options**: [Simple, Mixed Formats, Complex, Ambiguous]
- **Research Value**: Numeric transcription accuracy evaluation

## Implementation Priority

### High Priority
- Signal-to-Noise Ratio (auto-computed)
- Emotional State (manual annotation)
- Proper Noun Density (semi-automated)

### Medium Priority
- Device Movement
- Speech Hesitations
- Domain-Specific Vocabulary

### Low Priority (Research Interest)
- Frequency Response Issues
- Code-Switching
- Sentence Structure Complexity

## Notes

- Parameters marked as "auto-computed" can be calculated programmatically
- Manual annotation parameters should be added progressively
- Consider inter-annotator agreement studies for subjective parameters
- Some parameters may correlate strongly and could be combined or prioritized
dataset_metadata.json
ADDED
@@ -0,0 +1,7 @@
[
  {
    "id": "1",
    "audio": "audio/1.mp3",
    "ai_transcript": "aitranscripts/1.txt"
  }
]
label_studio_config.xml
ADDED
@@ -0,0 +1,181 @@
<View>
  <Header value="Audio Transcription Correction"/>
  <Audio name="audio" value="$audio"/>

  <View style="margin-top: 20px;">
    <Header value="AI Generated Title"/>
    <TextArea name="ai_generated_title" toName="audio"
              placeholder="Enter AI-generated title for this voice note..."
              rows="1" editable="true"/>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Original AI Transcript (Uncorrected)"/>
    <TextArea name="original_transcript" toName="audio"
              placeholder="Paste the original AI transcript here..."
              rows="4" editable="true"
              style="background-color: #f5f5f5; padding: 10px; border-radius: 5px; font-style: italic;"/>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Corrected Transcript"/>
    <TextArea name="corrected_transcript" toName="audio"
              placeholder="Type the corrected transcription here..."
              rows="6" editable="true"/>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Audio Challenges Present"/>
    <Choices name="audio_challenges" toName="audio" choice="multiple" showInline="true">
      <Choice value="Traffic Noise" hint="Road traffic sounds"/>
      <Choice value="Audible Conversations" hint="Other people talking"/>
      <Choice value="Outdoor Noise (General)" hint="Street/urban sounds"/>
      <Choice value="Background Music" hint="Music playing"/>
      <Choice value="Crying Baby" hint="Baby crying sounds"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Non-Speaker Content"/>
    <Choices name="non_speaker_content" toName="audio" choice="single" showInline="true">
      <Choice value="Yes" hint="Speaker addresses someone else not intended for transcription"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Incidental Audio Pickup Source"/>
    <Choices name="incidental_audio_source" toName="audio" choice="single" showInline="true">
      <Choice value="Speaker"/>
      <Choice value="Others"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Background Conversation Language"/>
    <Choices name="background_conversation_language" toName="audio" choice="single" showInline="true">
      <Choice value="English"/>
      <Choice value="Hebrew"/>
      <Choice value="Arabic"/>
      <Choice value="French"/>
      <Choice value="Russian"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Multilingual Transcript"/>
    <Choices name="multilingual_transcript" toName="audio" choice="single" showInline="true">
      <Choice value="True"/>
      <Choice value="False"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Audible Conversation Languages"/>
    <Choices name="conversation_languages" toName="audio" choice="multiple" showInline="true">
      <Choice value="English"/>
      <Choice value="Hebrew"/>
      <Choice value="Arabic"/>
      <Choice value="Russian"/>
      <Choice value="French"/>
      <Choice value="Amharic"/>
      <Choice value="Unknown"/>
      <Choice value="Other"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Recording Location"/>
    <TextArea name="recording_place" toName="audio"
              placeholder="Jerusalem, Israel"
              rows="1" editable="true" defaultValue="Jerusalem"/>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Recording Environment"/>
    <Choices name="recording_environment" toName="audio" choice="single" showInline="true">
      <Choice value="Indoor"/>
      <Choice value="Outdoor"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Microphone Source"/>
    <Choices name="microphone_source" toName="audio" choice="single" showInline="true">
      <Choice value="Phone Internal Mic" hint="Built-in phone microphone"/>
      <Choice value="Bluetooth Mic" hint="Bluetooth-connected microphone"/>
      <Choice value="Desktop Mic" hint="Computer microphone"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Microphone Type"/>
    <Choices name="microphone_type" toName="audio" choice="single" showInline="true">
      <Choice value="Earpiece" hint="Headphones/earbuds with mic"/>
      <Choice value="Lavalier" hint="Clip-on microphone"/>
      <Choice value="Gooseneck" hint="Flexible desktop microphone"/>
      <Choice value="Desktop" hint="Standard desktop microphone"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Microphone Model"/>
    <Choices name="microphone_model" toName="audio" choice="single" showInline="true">
      <Choice value="OnePlus Nord 3 Internal" hint="Built-in phone microphone"/>
      <Choice value="Poly 5200" hint="Poly 5200 Bluetooth microphone"/>
      <Choice value="ATR 4697" hint="ATR 4697 professional microphone"/>
      <Choice value="Other" hint="Other microphone model"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Bluetooth Codec"/>
    <Choices name="bluetooth_codec" toName="audio" choice="single" showInline="true">
      <Choice value="SBC" hint="Standard Bluetooth codec"/>
      <Choice value="AAC" hint="Advanced Audio Coding"/>
      <Choice value="aptX" hint="Qualcomm aptX codec"/>
      <Choice value="aptX HD" hint="High-definition aptX codec"/>
      <Choice value="LDAC" hint="Sony LDAC high-quality codec"/>
      <Choice value="LC3" hint="Low Complexity Communication Codec"/>
      <Choice value="N/A" hint="Not applicable (wired/internal mic)"/>
      <Choice value="Unknown" hint="Codec information unavailable"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Audio Quality"/>
    <Rating name="audio_quality" toName="audio"
            maxRating="5" icon="star" size="medium"/>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Voice Note Content Type"/>
    <Choices name="content_type" toName="audio" choice="multiple" showInline="false">
      <Choice value="Blog Outline" hint="Structure or ideas for a blog post"/>
      <Choice value="Email Draft" hint="Content or ideas for an email"/>
      <Choice value="Calendar Appointment" hint="Details for scheduling"/>
      <Choice value="Note To Self" hint="Personal thoughts or information"/>
      <Choice value="Reminder" hint="Things to remember"/>
      <Choice value="Task List" hint="List of tasks or action items"/>
      <Choice value="Grocery List" hint="Items to buy for food"/>
      <Choice value="Shopping List (Other)" hint="Items to buy (non-grocery)"/>
      <Choice value="Online Shopping" hint="Items researched or purchased online"/>
      <Choice value="Stack Research" hint="Technical or domain research"/>
      <Choice value="AI Prompt" hint="Instructions or queries for AI"/>
      <Choice value="System Prompt" hint="Configuration or setup instructions for systems"/>
    </Choices>
  </View>

  <View style="margin-top: 20px;">
    <Header value="Entities Present in Note"/>
    <Choices name="entities_present" toName="audio" choice="multiple" showInline="true">
      <Choice value="Dates" hint="Specific dates or time references"/>
      <Choice value="Persons" hint="Names of people"/>
      <Choice value="Placenames" hint="Geographic locations or places"/>
      <Choice value="Email Addresses" hint="Email addresses mentioned"/>
      <Choice value="Blog Title" hint="Blog or article titles"/>
      <Choice value="Acronym" hint="Acronyms or abbreviations"/>
      <Choice value="Organisations" hint="Company or organization names"/>
    </Choices>
  </View>

</View>
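One plausible way to wire this config up, assuming the label-studio-sdk package and a locally running Label Studio instance (the URL, API key, and the audio-path mapping below are assumptions; the config's `$audio` variable expects an `audio` key on each task):

```python
import json
from label_studio_sdk import Client

# Connect to an assumed local Label Studio instance.
ls = Client(url="http://localhost:8080", api_key="YOUR_API_KEY")
project = ls.start_project(
    title="Voice Notes Annotation",
    label_config=open("label_studio_config.xml").read(),
)

# Map the task list to the config's $audio variable.
tasks = json.load(open("annotations/task_list.json"))
project.import_tasks([{"audio": t["audio_path"]} for t in tasks])
```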
parameters.md
ADDED
@@ -0,0 +1,98 @@
# Annotation Parameters

This document outlines the key parameters and aspects to be annotated within the voice notes dataset. Annotations are added progressively as part of the review process.

## Audio Quality

- **Clarity**: How clear is the primary speaker's voice?
  - Options: [Very Clear, Clear, Somewhat Muffled, Muffled, Very Muffled]
- **Background Noise Level**: The overall level of background noise.
  - Options: [None, Low, Moderate, High, Very High]
- **Noise Type**: The type of background noise present.
  - Options: [None, Music, Crowd, Traffic, Nature, White Noise, Other (Specify)]
- **Reverberation**: The level of echo or reverberation.
  - Options: [None, Low, Moderate, High]

## Speaker Characteristics

- **Primary Speaker Dominance**: How much of the audio is dominated by the primary speaker?
  - Options: [>90%, 70-90%, 50-70%, 30-50%, <30%]
- **Number of Speakers**: An estimate of how many distinct speakers are present.
  - Options: [1, 2, 3, 4, 5+]
- **Speaker Overlap**: How much do speakers talk over each other?
  - Options: [None, Minimal, Moderate, Frequent]

## Transcription Quality

- **AI Transcript Accuracy (Overall)**: A general assessment of the Voicenotes.com STT accuracy for the entire note.
  - Options: [Very High (>95%), High (90-95%), Moderate (80-90%), Low (70-80%), Very Low (<70%)]
- **Specific Error Types**: Check all that apply.
  - [ ] Incorrect Words
  - [ ] Missed Words/Phrases
  - [ ] Added Words/Phrases
  - [ ] Punctuation Errors
  - [ ] Capitalization Errors
  - [ ] Speaker Labeling Errors (if multiple speakers)
- **Difficult Segments**: Note timecodes of segments that are particularly difficult for STT.

## Context & Content

- **Primary Topic**: The main subject of the voice note.
  - (Free text or predefined list based on your notes)
- **Language**: The primary language of the audio.
  - (E.g., English, Spanish, etc.)
- **Technical Jargon**: Is there specialized terminology?
  - Options: [None, Low, Moderate, High]

## Recording Details

- **Recording Location**: General location where the note was recorded.
  - (Text field, default: Jerusalem)
- **Recording Environment**: Whether the recording was made indoors or outdoors.
  - Options: [Indoor, Outdoor]
- **Microphone Source**: The general source of the microphone used.
  - Options: [Phone Internal Mic, Bluetooth Mic, Desktop Mic]
- **Microphone Type**: The specific type of microphone used.
  - Options: [Earpiece, Lavalier, Gooseneck, Desktop]

## Content Type

- **Voice Note Content Type**: The primary purpose or type of content captured in the note. (Defined in `label_studio_config.xml`)
  - Options:
    - Blog Outline
    - Email Draft
    - Calendar Appointment
    - Note To Self
    - Reminder
    - Task List
    - Grocery List
    - Shopping List (Other)
    - Online Shopping
    - Stack Research
    - AI Prompt
    - System Prompt

## Annotation State

- **Corrected**: Whether the AI transcript has been manually corrected.
  - Options: [Yes, No, Partially]
- **Annotator Notes**: Any additional observations or comments.

## Auto-Computed Fields

The following metrics will be automatically calculated and added to the dataset metadata:

- **Run Time (of file)**: Duration of the audio file.
- **Word Count (of transcript)**: Total number of words in the transcript.
- **Character Count (of transcript)**: Total number of characters in the transcript.
- **Word Error Rate (of original AI transcript)**: Calculated WER comparing the AI transcript to the corrected transcript.
- **Estimated Speaker WPM**: Estimated words per minute for the primary speaker.
- **Speaker WPM Classification**: Classification of speaker speed (1-5).
- **Average dB Level**: Average decibel level of the audio.

## Related Datasets

This dataset is part of a broader STT fine-tuning project that will produce multiple related datasets:

- **Voice Note Audio** (this dataset): Public dataset on Hugging Face for real-world STT evaluation
- **Basic STT Evaluation for Synthetic Voice Notes**: Complementary dataset for controlled evaluation scenarios
preprocessing/README.md
ADDED
@@ -0,0 +1,22 @@
# Preprocessing Workflow

This folder contains the preprocessing pipeline for adding new voice notes to the dataset.

## Folder Structure

- `raw_audio/` - Raw recordings as they come from your device
- `transcripts/` - AI-generated transcripts for the raw audio files
- `queue/` - Processed audio files ready to be added to the main dataset

## Workflow

1. Add new recordings to `raw_audio/`
2. Generate transcripts and place them in `transcripts/` (filenames should match audio files)
3. Process/clean audio as needed and move to `queue/`
4. Run `python3 preprocessing/move_to_dataset.py` to add new files to the main dataset
5. The script will automatically:
   - Assign new IDs to files
   - Move files to the appropriate directories
   - Regenerate the task list and dataset metadata

This keeps the main dataset organized while providing a staging area for new data.
preprocessing/move_to_dataset.py
ADDED
@@ -0,0 +1,71 @@
#!/usr/bin/env python3
"""
Script to move preprocessed data to the main dataset
"""

import os
import shutil
from pathlib import Path
import json

def move_preprocessed_data():
    """Move preprocessed data to the main dataset"""
    # Create directories if they don't exist
    os.makedirs("audio", exist_ok=True)
    os.makedirs("aitranscripts", exist_ok=True)

    # Get the next available ID
    existing_audio = list(Path("audio").glob("*"))
    next_id = 1
    if existing_audio:
        # Extract numbers from existing filenames and find the max
        ids = []
        for f in existing_audio:
            try:
                ids.append(int(f.stem))
            except ValueError:
                pass
        if ids:
            next_id = max(ids) + 1

    # Move files from the preprocessing queue
    queue_files = list(Path("preprocessing/queue").glob("*"))

    if not queue_files:
        print("No files in preprocessing queue")
        return

    moved_files = []
    for queue_file in queue_files:
        if queue_file.suffix in ['.mp3', '.wav']:
            # Move the audio file, renaming it to the next numeric ID
            new_name = f"{next_id}{queue_file.suffix}"
            dest_path = Path("audio") / new_name
            shutil.move(str(queue_file), str(dest_path))

            # Look for a corresponding transcript (matched by original stem)
            transcript_file = Path("preprocessing/transcripts") / f"{queue_file.stem}.txt"
            if transcript_file.exists():
                dest_transcript = Path("aitranscripts") / f"{next_id}.txt"
                shutil.move(str(transcript_file), str(dest_transcript))
            else:
                # Create an empty transcript file as a placeholder
                dest_transcript = Path("aitranscripts") / f"{next_id}.txt"
                dest_transcript.write_text("")

            moved_files.append((new_name, next_id))
            next_id += 1

    if moved_files:
        print(f"Moved {len(moved_files)} files to main dataset:")
        for filename, file_id in moved_files:
            print(f"  - {filename} (ID: {file_id})")

        # Regenerate the task list and dataset metadata
        print("\nRegenerating task list and dataset metadata...")
        os.system("python3 setup_annotation.py")
    else:
        print("No audio files found in preprocessing queue")

if __name__ == "__main__":
    move_preprocessed_data()
setup_annotation.py
ADDED
|
@@ -0,0 +1,130 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
+#!/usr/bin/env python3
+"""
+Simple annotation setup for Voice Notes dataset
+Creates task list from audio files and AI transcripts
+"""
+
+import json
+import os
+from pathlib import Path
+
+def create_task_list():
+    """Create annotation task list"""
+
+    # Create annotations directory
+    os.makedirs("annotations", exist_ok=True)
+
+    # Find all audio files
+    audio_files = list(Path("audio").glob("*.mp3"))
+    audio_files.extend(list(Path("audio").glob("*.wav")))
+
+    tasks = []
+    dataset_metadata = []
+
+    for audio_file in sorted(audio_files):
+        file_id = audio_file.stem
+        transcript_file = Path("aitranscripts") / f"{file_id}.txt"
+
+        # Read AI transcript
+        ai_transcript = ""
+        if transcript_file.exists():
+            ai_transcript = transcript_file.read_text().strip()
+
+        task = {
+            "id": file_id,
+            "audio_path": str(audio_file),
+            "ai_transcript": ai_transcript,
+            "corrected_transcript": "",
+            "parameters": {
+                "speaker_info": "",
+                "audio_quality": "",
+                "environment": "",
+                "corrections_needed": []
+            },
+            "status": "pending"
+        }
+        tasks.append(task)
+
+        # Also create dataset metadata with all fields
+        metadata_entry = {
+            "id": file_id,
+            "audio": str(audio_file),
+            "ai_transcript": ai_transcript,
+            "corrected_transcript": "",
+            "audio_challenges": [],
+            "non_speaker_content": "",
+            "conversation_languages": [],
+            "recording_place": "",
+            "microphone_type": "",
+            "recording_environment": "",
+            "audio_quality": 0,
+            "content_type": []
+        }
+        dataset_metadata.append(metadata_entry)
+
+    # Save task list
+    with open("annotations/task_list.json", "w") as f:
+        json.dump(tasks, f, indent=2)
+
+    # Save dataset metadata
+    with open("dataset_metadata.json", "w") as f:
+        json.dump(dataset_metadata, f, indent=2)
+
+    print(f"Created {len(tasks)} annotation tasks")
+    for task in tasks:
+        print(f"- {task['id']}: {task['audio_path']}")
+
+    return len(tasks)
+
+def prepare_for_hf():
+    """Prepare completed annotations for HF dataset"""
+    try:
+        from datasets import Dataset, Audio
+
+        with open("annotations/task_list.json") as f:
+            tasks = json.load(f)
+
+        # Get completed tasks
+        completed = [t for t in tasks if t["status"] == "completed"]
+
+        if not completed:
+            print("No completed annotations found")
+            return None
+
+        # Format for HF
+        hf_data = []
+        for task in completed:
+            hf_data.append({
+                "audio": task["audio_path"],
+                "ai_transcript": task["ai_transcript"],
+                "corrected_transcript": task["corrected_transcript"],
+                "audio_challenges": task.get("audio_challenges", []),
+                "non_speaker_content": task.get("non_speaker_content", ""),
+                "conversation_languages": task.get("conversation_languages", []),
+                "recording_place": task.get("recording_place", ""),
+                "microphone_type": task.get("microphone_type", ""),
+                "recording_environment": task.get("recording_environment", ""),
+                "audio_quality": task.get("audio_quality", 0),
+                "content_type": task.get("content_type", [])
+            })
+
+        dataset = Dataset.from_list(hf_data)
+        dataset = dataset.cast_column("audio", Audio())
+
+        # Save dataset
+        dataset.save_to_disk("annotations/hf_dataset")
+        print(f"HF dataset saved with {len(completed)} completed annotations")
+
+        return dataset
+
+    except ImportError:
+        print("Install datasets: pip install datasets")
+        return None
+
+if __name__ == "__main__":
+    create_task_list()
+    print("\nNext steps:")
+    print("1. Edit annotations/task_list.json")
+    print("2. Add corrected transcripts and parameters")
+    print("3. Set status to 'completed' when done")
+    print("4. Run prepare_for_hf() to create HF dataset")
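Two couplings in setup_annotation.py are worth noting. First, create_task_list() looks for AI transcripts under aitranscripts/, while this commit adds transcripts under transcripts/uncorrected/; if no aitranscripts/ directory exists at runtime, every ai_transcript field is simply left empty. Second, the richer dataset fields (audio_challenges, recording_place, audio_quality, and so on) are written only to dataset_metadata.json, while prepare_for_hf() reads those same keys from the top level of each entry in annotations/task_list.json, falling back to empty defaults via .get(). Those columns therefore stay empty unless annotators add the keys to the task entries themselves. A minimal sketch of completing one task programmatically (the task ID and field values are illustrative placeholders, not real annotations):

import json

# Load the task list produced by create_task_list()
with open("annotations/task_list.json") as f:
    tasks = json.load(f)

for task in tasks:
    if task["id"] == "1":  # illustrative task ID
        task["corrected_transcript"] = "final human-corrected text goes here"
        # Top-level keys matching the .get() lookups in prepare_for_hf();
        # values below are placeholders.
        task["audio_challenges"] = ["background-noise"]
        task["conversation_languages"] = ["en"]
        task["recording_place"] = "home office"
        task["microphone_type"] = "smartphone"
        task["recording_environment"] = "indoor"
        task["audio_quality"] = 4
        task["content_type"] = ["technical-note"]
        task["status"] = "completed"

with open("annotations/task_list.json", "w") as f:
    json.dump(tasks, f, indent=2)

Once entries are completed this way, prepare_for_hf() saves the Arrow dataset to annotations/hf_dataset, which can then be reloaded with datasets.load_from_disk("annotations/hf_dataset").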
transcripts/ground_truths/1.txt
ADDED
@@ -0,0 +1 @@
+This is going to be version 2 refactor of the Hugging Face voice note training data set because I've learned tonight how to do data annotation properly using Label Studio so it's going to be a start from scratch in the interest of making it much easier to maintain this data set and upload additional notes as I go along.
transcripts/uncorrected/1.txt
ADDED
@@ -0,0 +1 @@
+This is going to be version 2 refactor of the Hugging Face voice note training data set because I've learned tonight how to do data annotation properly using Label Studio so it's going to be a start from scratch in the interest of making it much easier to maintain this data set and upload additional notes as I go along.
transcripts/uncorrected/10.txt
ADDED
@@ -0,0 +1,7 @@
+I'm going to add the workflow to Contentful. I'm just going to document the one I'm submitting to the creator library, which is one away from the threshold for monetization. And then that can be just to prove it's got the code sample. We can actually do code samples from the JSON and to test out the main code block and specific areas and make sure I'll make sure as well the JSON is a type of supported language in the syntax highlighting, and that will validate that it's working.
+
+It might just be more efficient to say that what's been, forget the importing the old posts. Or I can just gradually update them going forward. And I wanna create a tag as well for N8N workflows. And then so that I can update this link from the agent website that I'm building.
+
+And I might ask N8N. I read the guidelines now for the workflows if they might would mind sharing for credential sanitization, redacting things beyond API keys. If you have, for example, your name in an agent configuration or an email, I'm manually editing them. And then emails, what is their preference regarding that? Is it okay to use things like redacted placeholders or dummy emails? What's their preference for how to submit them?
+
+And of course, it's important as well to check the workflows before submitting them because the API keys shouldn't be exposed because they'd be linked to its credentials. We just want to make sure that there's nothing else that, for example, in the payloads and webhooks, if those might reveal any IDs or anything else. At the moment, they're worth manually inspecting.
transcripts/uncorrected/11.txt
ADDED
@@ -0,0 +1,9 @@
+One final set of adjustments for V4 is the ability to adjust the sensitivity for both the cry detection and the motion detection.
+
+And likewise, like the other settings we had, which was saved in our local memory, these should also be saved in local memory.
+
+Please try to find a local memory solution that is robust and will of course persist across reboots.
+
+If the current one is not optimal, then let's find a better way.
+
+However, really this can persist in the current Linux system.
transcripts/uncorrected/12.txt
ADDED
@@ -0,0 +1,5 @@
+I'd like to get your recommendations for an AI choice of large language model. That would be efficient for what you might call kind of fairly simple but repetitive tasks in the sense of, let's say, for example, I'm running an agent that's going to process my voice notes and it's going to have a cleanup prompt. And so just cleaning up the format for to make it a bit more coherent basically and then saving that somewhere. So this might be run 50 times a day and I don't want to, you know, you rack up huge API costs on doing so.
+
+Traditionally, the use of a very strong contender for this kind of work was Turbo 3.5 and so on. But I feel like that's a little bit, I personally feel like there's no need to go quite that back far in the models. There are more modern up-to-date cost-efficient models. I'd be interested to know what the current time in terms of what OpenAI has, what maybe Cohere has, or any other LLMs that are really kind of optimized for what you might call, I think there's a big difference between the type of prompting that you might do in a conversational interface where you're looking for a lot of detail and interactivity versus agents for this kind of instructional tasks.
+
+The simple answer is it's an instructional fine-tune and so on. But I see that instructional and conversational are converging and there's less models now being explicitly marketed as instructional. So I'd just like to get your take on that.
transcripts/uncorrected/13.txt
ADDED
@@ -0,0 +1,5 @@
+I have an idea for an AI tool for Ubuntu and Linux grounded in the principle that Linux systems provide verbose logs, but most people don't make use of them because they're so verbose and overwhelming and complicated. And I think that this would actually be a superb use for AI, which can process this and make sense of this large swathe of information that generates every second the Linux system is running, starting from the boot.
+
+The objective would be to provide something like proactive maintenance, by which rather than waiting for problems to bring down the computer or for hardware to crash, viewing every time that the system boots and runs as an opportunity to listen to catch these entries and remediate before they cascade or cause outright failure. So the idea would be something like a process or an agent that runs three minutes into the boot sequence after the boot. So the user gets into the UI, hopefully, and it captures the first three minutes of logs from the boot sequence, brings that into a repository, and then from that, it parses that and analyzes what the logs are saying, and if there are anything, if anything is there for remediation.
+
+The user could edit the length of time parameter. So sometimes it might be advantageous to run it 15 minutes or even an hour or have it running continuously. But I think as a proof of concept, it'd be easiest to start with those first three minutes into the boot where a lot of these kind of startup errors might manifest. And of course, the target would be to provide the user with remediation steps. And the idea would be that if they do this every once a week or even every three days, they should hopefully get to the point where the system is really clean and there's nothing to be fixed; it's in a good working condition.
transcripts/uncorrected/2.txt
ADDED
@@ -0,0 +1,5 @@
+So, I have some IP cameras and I've been trying to figure out for a while how to access them, learning about it. Fortunately, the TP-Link cameras and Reolink give you an RTSP stream, but RTSP is kind of a tricky one to work with because you don't have an NVR. If you're not on the trying to forward it, it won't forward very easily out of the network. But if you are, if you can connect to it, it's great. I just don't—I haven't had success with any of the major NVRs. I'm not a huge fan of Frigate or most of them, to be honest. I feel like most of them are really actually just overcomplicating what needs to be done. They need to pull in a stream from these local cameras, make it accessible, and so on.
+
+Custom App with AI. But I realized that the first thing that needs to be done to get a stream out to the world is to translate from RTSP into something like HLS restreaming, basically. So, what I've done is created a deployed restreamer on the local server. That is a starting point. The raw camera streams can be served in a format that's much easier to connect to, whether from a web browser, from an Android, whatever.
+
+My question is, once you've gone through that process with Restreamer, and you want to do the next thing, which is, after you've got a good local setup, you probably want to figure out some way of accessing these cameras remotely, so you want to tunnel them. What's the most popular way to do that? Do people do stuff like Cloudflare? Or is there a more commonplace way to safely route IP cameras out of the local network so that you can monitor them remotely and whatever you're using for that?
transcripts/uncorrected/3.txt
ADDED
@@ -0,0 +1,3 @@
+Okay, so something, I hope this is correct. In Homebox, as you look at the ACID IDs, the UUIDs I mean, and how many digits there are, looking at the data volume as it populates, the path appears to be ACID UUID forward slash documents forward slash document UUID dot the file extension.
+
+So if that's the case, it should actually be fairly straightforward to make sure that the attachments remain attached to the right assets, so long as the UUIDs are preserved. So that's one less concern in the migrations.
transcripts/uncorrected/4.txt
ADDED
@@ -0,0 +1,5 @@
+So I'm just making a note that the stack component which I'm adding into my system today are some changes to the Docker network. Both intended practically as, you know, try this out and educationally as much of what I'm doing in N8N at the moment is. And that is adding in a memory layer.
+
+I've looked at vector databases, I've looked at storage, all the other moving pieces that agents need to work and the one that I haven't really actually got my hands dirty with yet is persistent memory. Which on the one hand you could say it's just a glorified different form of storage. It's clearly emerging with agents, very relevant to know this part of the stack.
+
+So I'm going to try to see what I can get on, give it a go, set it up, and may not actually use it for a while, if at all. But I'm going with mem0 if Windsurf can get it on. If not, I can try to see if there's anything else that is open source in the memory layer.
transcripts/uncorrected/5.txt
ADDED
@@ -0,0 +1,7 @@
+I have an Android tablet that I want to add to a Snap server. And it does, they're using the open source client called, I think it's called Snapdroid, and it can run the client server. And the issue is that the other Snap clients work very well. But this is very fragmented.
+
+So there's two, I assume, it's because it's a low end Android tablet. And the network connectivity is poor. The tablet is right next to the server box, however.
+
+So I'm wondering if it's possible to do a USB-C Ethernet adapter, so that the tablet actually gets Ethernet connectivity? Or is it just the case that the streaming to the Android tablet is going to be an issue?
+
+Or is there any better way to integrate between the Snap server and an Android tablet besides the workaround Snap client?
transcripts/uncorrected/6.txt
ADDED
@@ -0,0 +1,5 @@
+In Contentful I will add code samples as a content model.
+
+So we'll have the code block and then we'll have the language we can choose from the most common ones: YAML, Python, JavaScript, Bash, maybe plain text, Markdown.
+
+It may be a little bit more cumbersome, but that would probably ensure very robust rendering if they're added as inline elements instead of having to rely on the kind of handling logic to do that.
transcripts/uncorrected/7.txt
ADDED
@@ -0,0 +1,9 @@
+I need to add to the green lights and green, orange and red, all three of the alert scripts, that they should turn off the lights at the end because they are based on a number of iterations, which doesn't always fall out to be as it's planned. So the script should have an end that it turns off at the time.
+
+It'll be interesting to see if there's any way to get it connected to as a server for WinSurf to act upon. I don't know if it was mounted, if that could be the case.
+
+The other thing I wanted was a green, orange, green and red intermittent strobe. I don't know if for signaling lights, the actual way they work is, I guess they have to, it couldn't be any other way through power control on off.
+
+But in any event, I should ask it what the standard thing is because I think one second, one second. I feel like it's not quite that, but there's probably a standard in milliseconds for various types of alternating flash signals.
+
+And they should be standardized on that because I'm not sure that they even do the flashing anymore. I think they should flash, but only for like 2 minutes on each condition.
transcripts/uncorrected/8.txt
ADDED
@@ -0,0 +1,9 @@
+I have a Next.js website and the backend is being powered by Contentful. This was quite challenging to get set up. I'm wondering if there's... I want to add like preview images into the body of the text.
+
+Like if I want to link to a URL and have it come up with that kind of nice rendering that you get from... I don't know exactly what the library is, but it's like Medium has it.
+
+It's been around for a while. It embeds a preview into the text itself.
+
+Is the easiest way to do that? I don't know how to add them in the blog post in Contentful so that they'll be picked up with that kind of style when they render.
+
+Is there a specific way to do it there or is it something I want to do on the front end?
transcripts/uncorrected/9.txt
ADDED
@@ -0,0 +1,3 @@
+I want to add to the Linux wrapper. I might read it, rotate the voicemail and see if they have any thoughts about me actually sharing this icon on a semi-official basis. So I'm going to add their icon and ask that it has a dockable thing and an undockable thing. And that's number one.
+
+Number two is setting up the medical repositories to experimentally big project repositories for the agent using agent that way.