---
dataset_info:
  features:
  - name: l_sura
    dtype: string
  - name: l_ayah
    dtype: string
  - name: l_word
    dtype: string
  - name: l_subword
    dtype: string
  - name: form
    dtype: string
  - name: tag
    dtype: string
  - name: features
    dtype: string
  - name: phoneme
    dtype: string
  - name: location
    list: string
  splits:
  - name: train
    num_bytes: 13752640
    num_examples: 128219
  download_size: 1763501
  dataset_size: 13752640
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
size_categories:
- 100K<n<1M
---
# Qur'an Phoneme + Harakat Dataset
This dataset contains phoneme-level and diacritic-level representations of the Qur'an text, based on the Quranic Arabic Corpus transliteration. It is intended for use in speech recognition and text-to-speech models, especially for phoneme-level fine-tuning of models like Whisper.
## Dataset Structure
| Column | Description |
|---|---|
| LOCATION | Quranic verse location in the format (Sura:Ayah:Word:Subword) |
| FORM | Original transliterated word form (Buckwalter-style) |
| TAG | Part-of-speech tag (e.g., N = Noun, P = Preposition, ADJ = Adjective) |
| FEATURES | Morphological features and lemma information |
| PHONEME | Phoneme-level representation with diacritics, separated by underscores and spaces |
### Example
| LOCATION | FORM | TAG | FEATURES | PHONEME |
|---|---|---|---|---|
| (1:1:1:1) | bi | P | PREFIX | bi+ |
| (1:1:1:2) | somi | N | STEM | POS:N |
| (1:1:2:1) | {ll~ahi | PN | STEM | POS:PN |
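Because each row is a subword, whole-word transliterations can be rebuilt by grouping rows on their (sura, ayah, word) coordinates. A minimal sketch, using the three example rows above as in-memory dicts (the real dataset supplies these fields as the `l_sura`, `l_ayah`, `l_word`, and `form` columns):

```python
from collections import defaultdict

# Rows mirroring the example table above; in practice these come from the dataset.
rows = [
    {"l_sura": "1", "l_ayah": "1", "l_word": "1", "form": "bi"},
    {"l_sura": "1", "l_ayah": "1", "l_word": "1", "form": "somi"},
    {"l_sura": "1", "l_ayah": "1", "l_word": "2", "form": "{ll~ahi"},
]

# Group subword forms by (sura, ayah, word) and concatenate them into words.
words = defaultdict(list)
for row in rows:
    key = (row["l_sura"], row["l_ayah"], row["l_word"])
    words[key].append(row["form"])

full_words = {key: "".join(parts) for key, parts in words.items()}
print(full_words)
# {('1', '1', '1'): 'bisomi', ('1', '1', '2'): '{ll~ahi'}
```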
## Usage
You can load the dataset directly from the Hugging Face Hub:

```python
from datasets import load_dataset

dataset = load_dataset("bahriddin/quran-phoneme-harakat", split="train")
print(dataset[0])
```
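Since the PHONEME column is described as underscore- and space-separated, a simple tokenizer splits first on spaces (phoneme groups) and then on underscores (individual symbols). The sample string below is hypothetical, chosen only to illustrate the delimiter convention:

```python
# Hypothetical phoneme string; real values come from the dataset's "phoneme" column.
phoneme = "b_i s_o_m_i"

# Split on spaces into groups, then on underscores into individual phoneme symbols.
tokens = [group.split("_") for group in phoneme.split()]
print(tokens)
# [['b', 'i'], ['s', 'o', 'm', 'i']]
```

A flat symbol sequence, as expected by most phoneme-level fine-tuning pipelines, can then be obtained by concatenating the groups.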