---
tags:
- audio
- speech
- whisper
- dataset
---

# atc-validation-v2

Speech dataset prepared with Trelis Studio.

## Statistics

| Metric | Value |
|--------|-------|
| Source files | 1 |
| Validation samples | 13 |
| Total duration | 5.0 minutes |

## Columns

| Column | Type | Description |
|--------|------|-------------|
| `audio` | Audio | Audio segment (16 kHz); speech only, silence stripped via VAD |
| `text` | string | Plain transcription (no timestamps); backwards compatible |
| `text_ts` | string | Transcription with Whisper timestamp tokens (e.g., `<\|0.00\|>Hello<\|0.50\|>`) |
| `start_time` | string | Segment start in the original audio (HH:MM:SS.mmm) |
| `end_time` | string | Segment end in the original audio (HH:MM:SS.mmm) |
| `speech_duration` | float | Duration of speech in the segment (excluding silence) |
| `word_timestamps` | list | Word-level timestamps (relative to the speech-only audio) |
| `source_file` | string | Original audio filename |
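The `text_ts` format can be turned back into per-span timings with a small regex. A minimal sketch (the `parse_ts` helper is hypothetical, not part of the dataset tooling; it assumes timestamp tokens follow the `<|0.00|>` pattern shown above):

```python
import re

def parse_ts(text_ts):
    """Split a Whisper-style timestamped string into (start, end, text) triples."""
    # re.split with a capturing group keeps the timestamps in the result:
    # '', t0, text0, t1, text1, ..., tN, ''
    parts = re.split(r"<\|(\d+\.\d+)\|>", text_ts)
    triples = []
    for i in range(1, len(parts) - 2, 2):
        start, chunk, end = float(parts[i]), parts[i + 1], float(parts[i + 2])
        if chunk.strip():  # skip empty spans between back-to-back tokens
            triples.append((start, end, chunk.strip()))
    return triples
```

For example, `parse_ts("<|0.00|>Hello<|0.50|>")` yields a single span covering 0.00–0.50 s.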

## VAD Processing

Audio segments are processed with Silero VAD to match faster-whisper inference:

- Silence is stripped from the audio (only speech regions remain)
- Timestamps are relative to the concatenated speech-only audio
- This ensures training data matches inference behavior

## Training Usage

For Whisper timestamp training, use the two-bucket approach:

- **Bucket A (50%)**: use `text`, the plain transcription without timestamps
- **Bucket B (50%)**: use `text_ts`, the transcription with Whisper timestamp tokens
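The two-bucket split can be sketched as a per-sample coin flip (the `pick_target` helper is hypothetical; it assumes each sample is a dict exposing the `text` and `text_ts` columns):

```python
import random

def pick_target(sample, rng):
    """Choose the training target with 50/50 probability.

    Bucket A: plain transcription; Bucket B: timestamped transcription.
    """
    return sample["text"] if rng.random() < 0.5 else sample["text_ts"]
```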

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("Trelis/atc-validation-v2")
```

---

*Prepared with [Trelis Studio](https://studio.trelis.com)*