---
language:
- vi
license: mit
task_categories:
- text-generation
- text2text-generation
tags:
- spelling-correction
- grammatical-error-correction
- synthetic-data
- vietnam
size_categories:
- 100K<n<1M
---

## Noise Generation Logic

The noise was generated by a custom script with a **0.5 noise rate** (approximately 50% of tokens affected), with at least one error guaranteed per sample. The errors mimic real-world Vietnamese typing and spelling mistakes:

1. **Teencode & Lexical Variants** (~40%):
   - Syllable contractions: `ng` $\to$ `g`, `nh` $\to$ `h`, `qu` $\to$ `w`, `yê` $\to$ `i`.
   - Phonetic substitutions: `ph` $\to$ `f`, `gi` $\to$ `j`, `c/k` $\to$ `k`.
   - Dictionary slang: `vợ` $\to$ `vk`, `không` $\to$ `ko`.
2. **Regional Phonological Errors** (~30%):
   - **North**: `tr` $\leftrightarrow$ `ch`, `s` $\leftrightarrow$ `x`, `r` $\leftrightarrow$ `d` $\leftrightarrow$ `gi`.
   - **South**: final `n` $\leftrightarrow$ `ng`, `t` $\leftrightarrow$ `c`.
3. **Typing & Mechanical Errors** (~20%):
   - **Spatial**: hitting adjacent keys on a QWERTY keyboard.
   - **Telex**: wrong accent codes (`s` $\to$ `d`), double typing (`đ` $\to$ `ddd`).
   - **Operations**: random insertions, deletions, transpositions.
4. **Unaccented** (~10%):
   - Tone marks removed (e.g., `trường` $\to$ `truong`).

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("coung21/vi-spelling-correction")
print(dataset["train"][0])
# Output: {'source': '...', 'target': '...'}
```

## Credits

The source data for this dataset was extracted from a **Vietnamese Wikipedia dump**. The noise was synthetically generated using a custom noise injection pipeline to simulate realistic Vietnamese spelling errors.
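The card describes the noise pipeline only informally; a minimal Python sketch of two of the rule families (unaccented text via tone-mark removal, plus simple teencode/phonetic substitutions) might look like the following. The function names, substitution tables, and sampling weights here are illustrative assumptions, not the original generation script.

```python
import random
import unicodedata

# Illustrative subsets of the rule tables described on this card.
TEENCODE = {"không": "ko", "vợ": "vk"}              # dictionary slang
PHONETIC = [("ph", "f"), ("gi", "j"), ("qu", "w")]  # phonetic substitutions

def remove_tones(text: str) -> str:
    """Strip Vietnamese diacritics, e.g. 'trường' -> 'truong'."""
    # 'đ' has no combining-mark decomposition, so map it explicitly.
    text = text.replace("đ", "d").replace("Đ", "D")
    decomposed = unicodedata.normalize("NFD", text)
    # Drop combining marks (category Mn): tone marks, horns, hooks, etc.
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

def add_noise(sentence: str, noise_rate: float = 0.5, seed: int = 0) -> str:
    """Corrupt roughly `noise_rate` of the tokens, one error type per token."""
    rng = random.Random(seed)
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if rng.random() > noise_rate:
            continue
        choice = rng.random()
        if choice < 0.4 and tok in TEENCODE:    # teencode / slang
            tokens[i] = TEENCODE[tok]
        elif choice < 0.7:                      # phonetic substitution
            src, dst = rng.choice(PHONETIC)
            tokens[i] = tok.replace(src, dst)
        else:                                   # unaccented
            tokens[i] = remove_tones(tok)
    return " ".join(tokens)
```

A real pipeline would add the regional swaps, Telex/keyboard errors, and a retry loop to guarantee at least one error per sample.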