
Urdu G2P Phoneme Dictionary


Dataset Description

A comprehensive Grapheme-to-Phoneme (G2P) dictionary for Urdu, containing 478,000+ word-to-IPA mappings. This is the largest publicly available Urdu phoneme dictionary, designed for:

  • 🎙️ Text-to-Speech (TTS) systems
  • 🔊 Automatic Speech Recognition (ASR)
  • 📚 Linguistic research
  • 🧠 NLP applications

Dataset Summary

Metric            Value
------            -----
Total Words       478,000+
Language          Urdu (ur)
Script            Arabic (Nastaliq)
Phoneme Format    IPA (International Phonetic Alphabet)
File Format       JSON

How It Works

(Figure: G2P workflow diagram)

Dataset Structure

Data Format

The dataset is provided as a JSON file with word-phoneme mappings:

{
  "پاکستان": "paːkɪsˈt̪aːn",
  "اسلام": "ɪsˈlaːm",
  "آباد": "aːˈbaːd̪",
  "زندہ": "zɪnˈd̪ə",
  "باد": "baːd̪"
}

Data Fields

  • Key (Urdu Word): The Urdu word in Arabic script (Nastaliq)
  • Value (IPA Phoneme): The IPA transcription of the word

Data Statistics

  • Unique words: 478,000+
  • Coverage: Common vocabulary, proper nouns, and borrowed words
  • Diacritics: Both diacritized and non-diacritized forms included
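
Because both diacritized and non-diacritized forms appear in the dictionary, a lookup may need to fall back to a diacritic-stripped form. A minimal sketch (the `lookup` helper and the exact fallback behavior are assumptions, not part of the released library):

```python
import re

# Arabic diacritics (harakat, U+064B-U+0652) plus superscript alef (U+0670).
DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")

def lookup(phoneme_dict, word):
    # Try the exact (possibly diacritized) form first.
    if word in phoneme_dict:
        return phoneme_dict[word]
    # Fall back to the form with diacritics removed.
    return phoneme_dict.get(DIACRITICS.sub("", word))
```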

Usage

With the urdu-g2p Library (Recommended)

pip install urdu-g2p

from urdu_g2p import UrduG2P

g2p = UrduG2P()
phonemes = g2p("پاکستان زندہ باد")
print(' '.join(phonemes))
# Output: paːkɪsˈt̪aːn zɪnˈd̪ə baːd̪

Direct Loading

import json
from huggingface_hub import hf_hub_download

# Download the dataset
file_path = hf_hub_download(
    repo_id="humair-m/urdu-g2p-dictionary",
    filename="phoneme_map.json",
    repo_type="dataset"
)

# Load the dictionary
with open(file_path, 'r', encoding='utf-8') as f:
    phoneme_dict = json.load(f)

# Use it
word = "پاکستان"
phoneme = phoneme_dict.get(word, "NOT_FOUND")
print(f"{word} -> {phoneme}")
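
Note that the file is a single flat JSON object of scalar values, which pandas' read_json rejects ("If using all scalar values, you must pass an index"). If you want a tabular view, a sketch building a two-column DataFrame from the loaded dict (the inline sample stands in for the real phoneme_dict):

```python
import pandas as pd

# Small inline stand-in for the loaded phoneme_dict (flat {word: IPA} object).
phoneme_dict = {
    "پاکستان": "paːkɪsˈt̪aːn",
    "اسلام": "ɪsˈlaːm",
}

# Build the DataFrame from the dict's items rather than pd.read_json.
df = pd.DataFrame(list(phoneme_dict.items()), columns=["word", "ipa"])
```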

Source Code

The complete G2P library with advanced features (fallback, normalization, caching) is available on GitHub:

🔗 GitHub Repository: https://github.com/humair-m/urdu-g2p

Features

  • 📖 478K+ Word Dictionary: Extensive vocabulary coverage
  • 🧠 Smart Fallback: espeak-ng integration for OOV words
  • ⚡ High Performance: 168K+ chars/sec with LRU caching
  • 🔧 Configurable: Stress removal, tag filtering, diacritic modes
  • 🐍 Type-Safe API: Full Python type hints
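
The fallback-plus-caching layering described above could be sketched as follows; this is an illustrative assumption, and `espeak_fallback` is a stand-in name for the library's real espeak-ng integration:

```python
from functools import lru_cache

# Tiny stand-in for the full 478K-entry dictionary.
phoneme_dict = {"باد": "baːd̪"}

def espeak_fallback(word: str) -> str:
    # Placeholder only; the real library queries espeak-ng for OOV words.
    return f"<oov:{word}>"

@lru_cache(maxsize=100_000)
def to_phonemes(word: str) -> str:
    # Dictionary hit if available, otherwise fall back; results are cached.
    return phoneme_dict.get(word) or espeak_fallback(word)
```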

License

⚠️ NON-COMMERCIAL USE ONLY

This dataset and its accompanying code are licensed for non-commercial use only.

Allowed:
  • ✅ Academic research
  • ✅ Personal projects
  • ✅ Educational purposes

Not Allowed:
  • ❌ Commercial products/services
  • ❌ Monetization of any kind
  • ❌ Redistribution for profit

For commercial licensing, please contact:
📧 humairmunirawan@gmail.com

See the full LICENSE on GitHub.


Citation

If you use this dataset in your research, you must cite it as follows:

BibTeX

@dataset{urdu_g2p_dictionary_2026,
  author       = {Awan, Humair Munir},
  title        = {Urdu G2P Phoneme Dictionary: A Comprehensive Urdu-to-IPA Mapping},
  year         = {2026},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/humair-m/urdu-g2p-dictionary},
  version      = {2.0.0},
  note         = {478,000+ word dictionary. Non-commercial use only.}
}

APA

Awan, H. M. (2026). Urdu G2P Phoneme Dictionary: A Comprehensive Urdu-to-IPA Mapping (Version 2.0.0) [Dataset]. Hugging Face. https://huggingface.co/datasets/humair-m/urdu-g2p-dictionary


Author

Humair Munir Awan
📧 Email: humairmunirawan@gmail.com
🔗 GitHub: github.com/humair-m


Changelog

Version 2.0.0 (January 2026)

  • Initial Hugging Face release
  • 478,000+ word-phoneme mappings
  • Full IPA transcriptions with stress markers

Made with ❤️ for the Urdu language
