---
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
language:
  - en
  - he
tags:
  - speech-to-text
  - stt
  - evaluation
  - technical-vocabulary
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*
---

# Small STT Eval Audio Dataset

A small speech-to-text evaluation dataset containing 92 audio samples with ground truth transcriptions. Designed for evaluating STT systems on technical vocabulary, code-switching (English/Hebrew), and various speaking styles.

## Dataset Description

This dataset contains audio recordings with accompanying transcriptions across multiple categories:

| Category | Count | Description |
|----------|-------|-------------|
| tech_github | 5 | GitHub-related technical vocabulary |
| tech_huggingface | 4 | Hugging Face platform terminology |
| tech_docker | 5 | Docker and containerization terms |
| hebrew_daily | 10 | English with Hebrew words (daily life) |
| hebrew_food | 3 | English with Hebrew food terms |
| ai_ml | 9 | AI/ML technical vocabulary |
| local_tools | 8 | Local development tools |
| conversational | 10 | Casual conversational speech |
| narrative | 6 | Narrative/storytelling style |
| instructions | 7 | Instructional content |
| tech_linux | 6 | Linux system administration |
| tech_api | 4 | API and web services |
| tech_python | 5 | Python programming |
| mixed_workflow | 5 | Mixed technical workflows |
| mixed_locale | 2 | Mixed locale content |
| tech_web | 2 | Web development |
| tech_data | 1 | Data processing |

## Audio Specifications

- **Format**: WAV (PCM signed 16-bit little-endian)
- **Sample Rate**: 16kHz
- **Channels**: Mono
- **Average Duration**: ~5-10 seconds per sample
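A quick way to verify these specs on downloaded files is the Python standard library's `wave` module. The sketch below writes a one-second silent clip purely for demonstration; in practice you would point `check_specs` at the dataset's `.wav` files.

```python
import os
import struct
import tempfile
import wave


def check_specs(path: str) -> bool:
    """Return True if a WAV file matches the dataset specs:
    16 kHz sample rate, mono, 16-bit PCM."""
    with wave.open(path, "rb") as w:
        return (
            w.getframerate() == 16000
            and w.getnchannels() == 1
            and w.getsampwidth() == 2  # 2 bytes = 16-bit samples
        )


# Demo: create a one-second silent clip with those specs, then check it.
tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
tmp.close()
with wave.open(tmp.name, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack("<h", 0) * 16000)  # 16000 zero samples

ok = check_specs(tmp.name)
print(ok)  # True
os.remove(tmp.name)
```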

## Dataset Structure

```
data/
  ├── metadata.csv
  ├── 001_tech_github.wav
  ├── 002_tech_github.wav
  └── ...
```

The `metadata.csv` contains:
- `file_name`: Audio filename
- `transcription`: Ground truth transcription
- `category`: Content category

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("danielrosehill/Small-STT-Eval-Audio-Dataset")

# Access a sample
sample = dataset["train"][0]
print(sample["transcription"])
# sample["audio"] holds the decoded waveform as
# {"array": ..., "sampling_rate": 16000}
```

## Intended Use

This dataset is intended for:
- Evaluating STT model accuracy on technical vocabulary
- Testing code-switching (English/Hebrew) recognition
- Benchmarking STT systems on varied speaking styles
- Development and testing of speech recognition pipelines

## Recommended Evaluation Packages

For WER (Word Error Rate) evaluation, we recommend using text normalization to handle variations in number formatting, punctuation, and casing:

- **[whisper-normalizer](https://pypi.org/project/whisper-normalizer/)**: Text normalization for STT evaluation (handles "3000" vs "three thousand", punctuation, casing)
- **[werpy](https://pypi.org/project/werpy/)**: WER calculation with detailed error analysis

```python
from whisper_normalizer.english import EnglishTextNormalizer
from werpy import wer

normalizer = EnglishTextNormalizer()

# Example reference/hypothesis pair (replace with your own)
ground_truth = "Clone the repo and run three thousand tests."
model_output = "clone the repo and run 3000 tests"

# Normalize both reference and hypothesis before comparison
reference = normalizer(ground_truth)
hypothesis = normalizer(model_output)

error_rate = wer(reference, hypothesis)
print(error_rate)
```
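For reference, WER itself is just word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal dependency-free sketch of that computation (the packages above add normalization and detailed error breakdowns on top of this):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)


# One substitution ("the" -> "a") over four reference words
print(word_error_rate("run the docker container", "run a docker container"))  # 0.25
```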

## Limitations

- Small dataset size (92 samples)
- Single speaker
- Controlled recording environment
- Limited Hebrew vocabulary (loan words only, not full Hebrew speech)

## License

CC-BY-4.0