---
license: mit
viewer: false
---

#### Why create this dataset?
We present a high-quality Hebrew speech transcription dataset generated with the **Whisper Turbo** model.
In contrast, **[HebDB](https://pages.cs.huji.ac.il/adiyoss-lab/HebDB/)** relies on **Whisper Large**, which is demonstrably inferior to Whisper Turbo in transcription accuracy and robustness.
|
Furthermore, HebDB distributes audio files with **Hebrew filenames**, an avoidable design choice that introduces unnecessary friction into modern training pipelines and preprocessing workflows.
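To make the friction concrete: many training pipelines assume ASCII-safe paths, so a common preprocessing step is to reject or rename non-ASCII filenames before sharding. A minimal sketch of such a check (the helper name is ours, purely illustrative, not part of any released tooling):

```python
from pathlib import Path

def is_pipeline_safe(path: str) -> bool:
    """Return True if the filename is pure ASCII, i.e. safe for
    tooling that mishandles non-Latin filenames."""
    return Path(path).name.isascii()

# A Hebrew filename, as distributed by HebDB, fails the check,
# while an ASCII filename passes it.
print(is_pipeline_safe("שיחה_001.wav"))  # False
print(is_pipeline_safe("clip_001.wav"))  # True
```

Datasets that ship ASCII filenames from the start let this entire renaming/mapping step be skipped.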
|
Our dataset is **deliberately engineered for research and large-scale training**: it requires no additional normalization or restructuring and is immediately usable for training a wide range of speech and language models.