---
language: ar
license: mit
tags:
- end-of-utterance
- dialogue
- text-classification
dataset_size: 30000
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Arabic End-of-Utterance (EOU) Detection Dataset – Saudi Dialect

- Dataset type: Binary classification (EOU vs. non-EOU)
- Language: Arabic (Saudi dialect focus)
- Size: 30,000 examples (≈15k positive + 15k negative)
- Format: JSON list of `{"text": ..., "label": ...}`
## Dataset Summary
This dataset contains conversational Arabic utterances labeled for End-of-Utterance (EOU) detection. The goal is to train models that can predict whether a speaker has finished speaking based on transcription text only, enabling real-time turn-taking in voice agents.
The dataset is tailored especially to Saudi-dialect Arabic but also includes general conversational Arabic.
It contains:
- Positive samples (label = 1): Full utterances representing completed turns.
- Negative samples (label = 0): Incomplete prefixes generated from each utterance to simulate non-final turns.
## LiveKit Context Note
LiveKit provides up to 6 previous turns to the EOU model for prediction.
To take advantage of this, each example includes sliding-window conversational context: up to 4 previous turns are joined to the target utterance with the token `[SEP]`.
This allows the model to learn turn-aware EOU detection rather than only relying on the last utterance.
## Dataset Structure
Each JSON record looks like:

```json
{
  "text": "مرحبا كيف حالك [SEP] تمام الحمد لله",
  "label": 1
}
```
- `text` (string): a full utterance, or context + utterance joined with `[SEP]`
- `label` (int): `1` → end of utterance; `0` → not end of utterance (incomplete prefix)
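As a minimal loading sketch using only the standard library (the file name passed in is up to you; the `load_eou_dataset` helper is ours, not part of the repo):

```python
import json

def load_eou_dataset(path):
    """Load the JSON list of {"text": ..., "label": ...} records."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# A record matching the structure above: context + utterance joined by [SEP].
example = {"text": "مرحبا كيف حالك [SEP] تمام الحمد لله", "label": 1}
```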
## How the Dataset Was Created
We used a custom Python script (included in the repo) to generate both positive and negative examples from the original ASR-aligned CSV transcripts of the Sada dataset (download its `train.csv` file).
### 1. Preprocessing
- Remove excessive spacing
- Preserve Arabic and English punctuation
- Sort utterances by timestamp
- Group by audio file name
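The steps above can be sketched as follows. The column names `file`, `start`, and `text` are assumptions about the Sada CSV schema; the script in the repo is authoritative.

```python
import re
from collections import defaultdict

def clean_text(text):
    """Collapse excessive whitespace; Arabic and English punctuation is kept."""
    return re.sub(r"\s+", " ", text).strip()

def group_by_file(rows):
    """Group utterance rows by audio file name, sorted by start timestamp."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["file"]].append(row)
    for utterances in groups.values():
        utterances.sort(key=lambda r: r["start"])
    return groups
```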
### 2. Sliding Window Context
For each utterance, we generate multiple samples with context lengths of 0, 1, 2, 3, or 4 previous turns.
Example:

```
U1 [SEP] U2 [SEP] U3
```
This context design aligns with LiveKit’s turn input (up to 6 turns).
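The windowing can be sketched as below; this is a simplified illustration (the function name is ours), with `max_context=4` mirroring the design above.

```python
SEP = " [SEP] "

def context_windows(utterances, max_context=4):
    """For each utterance, build samples with 0..max_context previous turns."""
    samples = []
    for i, utterance in enumerate(utterances):
        for k in range(min(i, max_context) + 1):
            # k previous turns followed by the target utterance
            window = utterances[i - k:i] + [utterance]
            samples.append(SEP.join(window))
    return samples
```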
### 3. Positive Samples (label = 1)
Each full utterance and its context version becomes a positive example.
### 4. Negative Samples (label = 0)
We generate up to 5 incomplete prefixes per utterance:
Example:

```
"انا كنت" → 0
"انا كنت أبغى" → 0
"انا كنت أبغى اسوي" → 0
```
Prefixes simulate real-time partial speech coming from an STT system.
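A sketch of the prefix generation, assuming word-boundary truncation (the repo script may truncate differently, e.g. on characters):

```python
def incomplete_prefixes(utterance, max_prefixes=5):
    """Yield up to max_prefixes word-level prefixes, excluding the full utterance."""
    words = utterance.split()
    prefixes = []
    for n in range(1, len(words)):  # stop before the complete utterance
        prefixes.append(" ".join(words[:n]))
        if len(prefixes) == max_prefixes:
            break
    return prefixes

# Each prefix becomes a negative (label = 0) example.
negatives = [(p, 0) for p in incomplete_prefixes("انا كنت أبغى اسوي")]
```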
### 5. Balancing
- 15k positives
- 15k negatives
- Total: 30,000 examples
### 6. Final Shuffle
The dataset is shuffled globally to prevent ordering effects.
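Balancing and shuffling can be sketched as below; the per-class cap of 15,000 matches the counts above, while the fixed seed is our assumption for reproducibility.

```python
import random

def balance_and_shuffle(samples, per_class=15_000, seed=42):
    """Keep per_class examples of each label, then shuffle globally."""
    positives = [s for s in samples if s["label"] == 1][:per_class]
    negatives = [s for s in samples if s["label"] == 0][:per_class]
    balanced = positives + negatives
    random.Random(seed).shuffle(balanced)
    return balanced
```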
## Intended Uses

### Primary use case
Training end-of-utterance detection models for:
- LiveKit agents
- Voice assistants
- Real-time dialog systems
- Arabic conversational AI
- Turn-taking prediction
### Not intended for
- STT training
- Language modeling
- Speaker diarization
## Limitations
- Primarily focused on Saudi dialect
- Generated negative prefixes may not capture all real-time ASR errors
- No audio included (text-only dataset)