---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
- he
tags:
- speech-to-text
- stt
- evaluation
- technical-vocabulary
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: data/*
---
# Small STT Eval Audio Dataset
A small speech-to-text evaluation dataset containing 92 audio samples with ground truth transcriptions. Designed for evaluating STT systems on technical vocabulary, code-switching (English/Hebrew), and various speaking styles.
## Dataset Description
This dataset contains audio recordings with accompanying transcriptions across multiple categories:
| Category | Count | Description |
|----------|-------|-------------|
| tech_github | 5 | GitHub-related technical vocabulary |
| tech_huggingface | 4 | Hugging Face platform terminology |
| tech_docker | 5 | Docker and containerization terms |
| hebrew_daily | 10 | English with Hebrew words (daily life) |
| hebrew_food | 3 | English with Hebrew food terms |
| ai_ml | 9 | AI/ML technical vocabulary |
| local_tools | 8 | Local development tools |
| conversational | 10 | Casual conversational speech |
| narrative | 6 | Narrative/storytelling style |
| instructions | 7 | Instructional content |
| tech_linux | 6 | Linux system administration |
| tech_api | 4 | API and web services |
| tech_python | 5 | Python programming |
| mixed_workflow | 5 | Mixed technical workflows |
| mixed_locale | 2 | Mixed locale content |
| tech_web | 2 | Web development |
| tech_data | 1 | Data processing |
## Audio Specifications
- **Format**: WAV (PCM signed 16-bit little-endian)
- **Sample Rate**: 16kHz
- **Channels**: Mono
- **Average Duration**: ~5-10 seconds per sample
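The declared format can be verified with Python's standard-library `wave` module. A minimal sketch; since no dataset file is bundled here, it writes 0.5 s of silence in the same format as a stand-in (with the real data, point `check_specs` at a file such as `data/001_tech_github.wav` instead):

```python
import struct
import wave

def check_specs(path):
    """Read a WAV header and return its format parameters."""
    with wave.open(path, "rb") as wav:
        return {
            "sample_rate": wav.getframerate(),
            "channels": wav.getnchannels(),
            "sample_width_bytes": wav.getsampwidth(),
            "duration_s": wav.getnframes() / wav.getframerate(),
        }

# Stand-in file: 0.5 s of silence in the dataset's declared format
# (16 kHz, mono, 16-bit PCM little-endian).
with wave.open("sample.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)       # 2 bytes = 16-bit
    out.setframerate(16000)
    out.writeframes(struct.pack("<8000h", *([0] * 8000)))

specs = check_specs("sample.wav")
assert specs["sample_rate"] == 16000
assert specs["channels"] == 1
assert specs["sample_width_bytes"] == 2  # 16-bit PCM
print(specs)
```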
## Dataset Structure
```
data/
├── metadata.csv
├── 001_tech_github.wav
├── 002_tech_github.wav
└── ...
```
The `metadata.csv` contains:
- `file_name`: Audio filename
- `transcription`: Ground truth transcription
- `category`: Content category
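When the full `datasets` stack isn't needed, the metadata can be read directly with the standard-library `csv` module. A minimal sketch of the schema above; the embedded rows are illustrative, not actual dataset transcriptions:

```python
import csv
import io
from collections import Counter

# Illustrative rows following the metadata.csv schema
# (file_name, transcription, category). With the real dataset,
# open data/metadata.csv instead of this inline string.
metadata_csv = """file_name,transcription,category
001_tech_github.wav,Clone the repository and open a pull request.,tech_github
002_tech_github.wav,Rebase the feature branch onto main.,tech_github
010_tech_huggingface.wav,Push the model to the Hugging Face Hub.,tech_huggingface
"""

rows = list(csv.DictReader(io.StringIO(metadata_csv)))
by_category = Counter(row["category"] for row in rows)
print(by_category)
```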
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("danielrosehill/Small-STT-Eval-Audio-Dataset")

# Access a sample
sample = dataset["train"][0]
print(sample["transcription"])

# The "audio" column is decoded to a dict with the raw waveform
# and its sample rate
waveform = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]  # 16000
```
## Intended Use
This dataset is intended for:
- Evaluating STT model accuracy on technical vocabulary
- Testing code-switching (English/Hebrew) recognition
- Benchmarking STT systems on varied speaking styles
- Development and testing of speech recognition pipelines
## Recommended Evaluation Packages
For WER (Word Error Rate) evaluation, we recommend using text normalization to handle variations in number formatting, punctuation, and casing:
- **[whisper-normalizer](https://pypi.org/project/whisper-normalizer/)**: Text normalization for STT evaluation (handles "3000" vs "three thousand", punctuation, casing)
- **[werpy](https://pypi.org/project/werpy/)**: WER calculation with detailed error analysis
```python
from whisper_normalizer.english import EnglishTextNormalizer
from werpy import wer

normalizer = EnglishTextNormalizer()

# ground_truth comes from metadata.csv; model_output from your STT system
# Normalize both reference and hypothesis before comparison
reference = normalizer(ground_truth)
hypothesis = normalizer(model_output)
error_rate = wer(reference, hypothesis)
```
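For a dependency-free baseline, WER is the word-level edit distance (substitutions, insertions, and deletions) divided by the reference word count. A minimal reference implementation, not a substitute for werpy's detailed error analysis:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference -> WER of 0.25
print(word_error_rate("push the docker image", "push a docker image"))  # 0.25
```

Run it on normalized text, as above; otherwise casing and punctuation differences inflate the error rate.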
## Limitations
- Small dataset size (92 samples)
- Single speaker
- Controlled recording environment
- Limited Hebrew vocabulary (loan words only, not full Hebrew speech)
## License
CC-BY-4.0