---
license: mit
viewer: false
task_categories:
- text-classification
- text-to-speech
---
Dataset for the [phonikud](https://github.com/thewh1teagle/phonikud) model.
The dataset contains millions of clean Hebrew sentences marked with nikud and additional phonetic marks.
The format of each line is `text<TAB>phonemes`.
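For example, a minimal sketch of reading the pairs in Python (the file name below is illustrative, not a file shipped with the dataset):

```python
from pathlib import Path

def load_pairs(path):
    """Yield (text, phonemes) tuples from a text<TAB>phonemes file."""
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue  # skip blank lines
        text, phonemes = line.split("\t", maxsplit=1)
        yield text, phonemes

# Hypothetical file name; substitute an actual dataset file.
for text, phonemes in load_pairs("knesset_nikud.txt"):
    print(text, "->", phonemes)
    break  # show just the first pair
```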
## Changelog (knesset_nikud - 5 million lines)
### v6
- Fix words containing `Oto`, such as `Otobus` or `Otomati`
### v5
- Fix the `Kshe` prefix (was incorrectly `Keshe`)
- Fix all shva prefixes in knesset
### v4
- Remove overly long lines based on the IQR formula
### v3
- Add Shva Na via rule-based heuristics (e.g. `למנרי`)
### v2
- v1 converted to txt via syllable partitioning and hatama (stress) marking
### v1
- Base CSV from Dicta with hatama and nikud
## Changelog (hedc4 - 2 million lines)
### v1
- Created a new dataset from https://huggingface.co/datasets/thewh1teagle/heb-text in the `text<TAB>phonemes` format
## 🛠️ Knesset Dataset Creation Steps
1. **Sourced Raw Text**
Downloaded Hebrew parliamentary tweets from the [IsraParlTweet dataset](https://huggingface.co/datasets/guymorlan/IsraParlTweet) (CC-BY-4.0).
2. **Normalized & Filtered**
- Applied Unicode NFD normalization.
- Kept only sentences composed entirely of the characters `\n '!,.?אבגדהוזחטיךכלםמןנסעףפץצקרשת ` (see the normalization sketch after this list).
- Result: 7.8 million unvoweled sentences.
3. **Added Niqqud & Metadata**
- Used Dicta's diacritizer to add niqqud.
- Extracted morphological metadata for each word (prefixes, POS, stress, etc.).
- Saved to CSV: each row = full sentence, with aligned niqqud and metadata.
4. **Syllabification & Annotation**
- Marked stress and prefixes using the metadata.
- Split words into syllables.
5. **Filtered Long Sentences**
- Removed outlier-length sentences using the IQR method (see the IQR sketch after this list).
- Result: 5 million cleaned and processed sentences.
6. **Shva Handling**
- Applied heuristic rules to identify vocalic shva (e.g., in common prefixes like למנרי); a sketch follows the list.
- Note: Rules are approximate and not strictly based on Academy of Hebrew standards.
7. **Built Word Frequency Lexicon**
- Stripped prefixes from words.
- Sorted unique forms by frequency of appearance (see the lexicon sketch after this list).
8. **Manual Corrections**
- Identified ~1,000 high-frequency words with common stress/shva errors.
- Corrected them manually to improve data quality.
- These corrections impact hundreds of thousands to millions of occurrences.
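Below are small Python sketches of some of the steps above. First, the normalization and whitelist filtering from step 2 (function names are illustrative; the character set is copied from the step):

```python
import unicodedata

# Allowed characters from step 2: newline, space, apostrophe,
# basic punctuation, and the Hebrew alphabet including final forms.
ALLOWED = set("\n '!,.?אבגדהוזחטיךכלםמןנסעףפץצקרשת ")

def normalize(sentence: str) -> str:
    """Apply Unicode NFD normalization, as in step 2."""
    return unicodedata.normalize("NFD", sentence)

def is_clean(sentence: str) -> bool:
    """Keep only sentences made entirely of allowed characters."""
    return all(ch in ALLOWED for ch in sentence)

sentences = ["שלום, עולם!", "hello world"]
print([s for s in map(normalize, sentences) if is_clean(s)])
# only the Hebrew sentence survives
```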
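The IQR filtering from step 5 can be sketched as follows, assuming the common 1.5×IQR cutoff (the exact multiplier is not stated here):

```python
def iqr_filter(sentences):
    """Drop sentences whose character length is an IQR outlier."""
    lengths = sorted(len(s) for s in sentences)

    def quantile(q):
        # nearest-rank quantile, adequate for a sketch
        return lengths[round(q * (len(lengths) - 1))]

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [s for s in sentences if low <= len(s) <= high]
```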
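One plausible reading of the shva heuristic from step 6, assuming `למנרי` names the set of word-initial letters whose shva is treated as vocal; this interpretation and the marker character are assumptions, not confirmed by the pipeline itself:

```python
SHVA = "\u05B0"  # Hebrew point sheva
# Assumption: למנרי is read as the letter set {ל, מ, נ, ר, י}.
PREFIX_LETTERS = set("למנרי")

def mark_shva_na(word: str, marker: str = "|") -> str:
    """Insert a hypothetical marker after a word-initial vocal shva."""
    if len(word) >= 2 and word[0] in PREFIX_LETTERS and word[1] == SHVA:
        return word[:2] + marker + word[2:]
    return word

print(mark_shva_na("לְדוגמה"))  # -> לְ|דוגמה
```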
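And the frequency lexicon from step 7; the prefix handling here is a naive stand-in, since the real pipeline strips prefixes using the morphological metadata from step 3:

```python
from collections import Counter

def build_lexicon(sentences, prefixes=("ה",)):
    """Count word forms, sorted by frequency of appearance."""
    counts = Counter()
    for sentence in sentences:
        for word in sentence.split():
            for prefix in prefixes:  # stand-in for metadata-driven stripping
                if word.startswith(prefix) and len(word) > len(prefix):
                    word = word[len(prefix):]
                    break
            counts[word] += 1
    return counts.most_common()

print(build_lexicon(["הספר על השולחן", "ספר טוב"]))
```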