---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- uz
---
# Speech-to-Text Evaluation Dataset
|
## Dataset Overview
|
This dataset is designed for evaluating Uzbek speech-to-text (STT) models on real-world conversational speech. The audio samples were collected from various open Telegram groups, capturing natural voice messages in diverse acoustic conditions and speaking styles.
|
### Key Statistics
|
- **Total Samples**: 745 audio files
- **Total Duration**: 1 hour 40 minutes (~100 minutes)
- **Average Duration**: ~8 seconds per sample
- **Source**: Voice messages from various open Telegram groups
- **Transcriptions**: Manually annotated
|
## Dataset Structure
|
The dataset is stored as a `datasets.Dataset` object in Arrow format, containing the following fields:
|
- `name`: Name of the audio file
- `audio`: Audio data (a dict with `array` and `sampling_rate`)
- `transcription`: Ground-truth text transcription (manually annotated)
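As a sketch of this schema, a single example has roughly the following shape. All values below are illustrative placeholders, not taken from the dataset:

```python
# Illustrative example record; the file name, waveform values, and sampling
# rate are made up for demonstration, not drawn from the actual dataset.
example = {
    "name": "voice_message_0001.ogg",
    "audio": {
        "array": [0.0, 0.01, -0.02, 0.005],  # waveform samples as floats
        "sampling_rate": 16000,              # assumed rate, for illustration
    },
    "transcription": "salom qalaysiz",       # lowercased, punctuation removed
}

# Duration in seconds = number of samples / sampling rate
duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(f"Duration: {duration:.6f} s")
```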
|
## Loading the Dataset
|
### Installation
|
To use this dataset, you need to install the Hugging Face `datasets` library:
|
```bash
pip install datasets
```
|
### Basic Loading
|
```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("OvozifyLabs/asr_evaluate_set")

# View dataset information (load_dataset returns a DatasetDict keyed by split)
print(dataset)

# len() on a DatasetDict counts splits, not samples, so index the split first
# (assuming the default "train" split)
print(f"Number of samples: {len(dataset['train'])}")
```
|
## Data Characteristics
|
### Audio Properties
|
- **Source Domain**: Conversational voice messages from Telegram
- **Variability**: Multiple speakers, diverse acoustic environments
- **Recording Conditions**: Real-world
- **Language**: Uzbek
|
### Transcription Details
|
- **Annotation Method**: Manual transcription
- **Quality**: Human-verified ground-truth labels
- **Convention**: Punctuation removed, text lowercased
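Because the reference transcriptions are lowercased with punctuation removed, model hypotheses should be normalized the same way before scoring. A minimal sketch (this strips ASCII punctuation only; how apostrophe-like characters in Uzbek Latin spellings such as oʻ/gʻ were handled depends on the annotation):

```python
import string

def normalize(text: str) -> str:
    """Lowercase and strip ASCII punctuation, mirroring the dataset's convention."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())  # collapse repeated whitespace

print(normalize("Salom, qalaysiz?"))  # -> "salom qalaysiz"
```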
|
## Use Cases
|
This dataset is suitable for:
|
- Evaluating speech-to-text model performance on conversational speech
- Benchmarking ASR systems on real-world voice messages
- Testing model robustness to varied acoustic conditions
- Comparing different STT models
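For the evaluation and benchmarking use cases above, the standard metric is word error rate (WER). A self-contained sketch is shown below; in practice, libraries such as `jiwer` provide the same computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One reference word ("qalaysiz") is missing from the hypothesis: WER = 1/3
print(wer("salom qalaysiz yaxshimisiz", "salom yaxshimisiz"))
```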