---
license: cc-by-nc-sa-4.0
language:
- bo
tags:
- classical-tibetan
- historical-text
- normalisation
- seq2seq
- parallel-corpus
- data-augmentation
- low-resource
- digital-humanities
size_categories:
- 1M<n<10M
task_categories:
- text2text-generation
---
# normalisation-S2S-training
A large-scale parallel training dataset for **Classical Tibetan text normalisation**, containing approximately 2 million line pairs mapping diplomatic (non-standard, abbreviated) Tibetan manuscript text to Standard Classical Tibetan. This dataset was used to train the sequence-to-sequence normalisation models released as part of the [PaganTibet](https://www.pagantibet.com/) project.
The dataset combines a manually curated gold-standard corpus with extensively augmented data generated using four complementary strategies designed to simulate the scribal variation, abbreviation, and orthographic inconsistency characteristic of historical Tibetan manuscripts.
This dataset is part of the [PaganTibet](https://www.pagantibet.com/) project and accompanies the paper:
> Meelen, M. & Griffiths, R.M. (2026) 'Historical Tibetan Normalisation: rule-based vs neural & n-gram LM methods for extremely low-resource languages' in *Proceedings of the AI4CHIEF conference*, Springer.
Please cite the paper and the [code repository](https://github.com/pagantibet/normalisation) when using this dataset.
---
## Dataset Description
Classical Tibetan manuscripts present significant challenges for automatic normalisation: texts are riddled with abbreviations, non-standard spellings, diacritic variation, and scribal idiosyncrasies, while parallel training data — pairs of diplomatic input alongside normalised output — is extremely scarce. This dataset addresses that scarcity through systematic data augmentation, expanding a small gold-standard collection into a training corpus of over 2 million examples.
Each row in the dataset is a single line of Tibetan text. The dataset is structured as a **source–target parallel corpus**: source lines contain diplomatic or non-standard Tibetan, and target lines contain the corresponding Standard Classical Tibetan normalisation. Because the augmentation pipeline generates source-side variation from known target-side text, source and target lines are paired and must be used together during training.
The dataset is provided in its **non-tokenised** form. A tokenised version was also used in experiments (see Meelen & Griffiths 2026) but is not separately released, as tokenisation can be applied at training time using the scripts provided in the [Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation) directory.
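Because every source line must stay aligned with its target line, loading the corpus amounts to zipping two line-aligned files. The sketch below assumes a hypothetical two-file layout (`train.src` / `train.tgt`); adapt the paths to however you export the dataset.

```python
def load_parallel(src_path, tgt_path):
    """Pair aligned diplomatic (source) and normalised (target) lines.

    Raises if the two files have different line counts, since a silent
    misalignment would corrupt every downstream training pair.
    """
    with open(src_path, encoding="utf-8") as fs, open(tgt_path, encoding="utf-8") as ft:
        src = fs.read().splitlines()
        tgt = ft.read().splitlines()
    if len(src) != len(tgt):
        raise ValueError(f"misaligned corpus: {len(src)} source vs {len(tgt)} target lines")
    return list(zip(src, tgt))
```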
### Dataset Statistics
| Split | Rows |
|---|---|
| train | ~2,028,816 |
---
## Data Sources
The dataset draws on three underlying sources:
**1. Gold-standard parallel data (PaganTibet corpus)**
A collection of 7,421 manually normalised line pairs from the PaganTibet corpus, representing real diplomatic Tibetan manuscript text alongside its Standard Classical Tibetan normalisation. This is the only portion of the dataset containing genuine diplomatic source text; all other source-side material is synthetically generated from standard text.
**2. Standard Classical Tibetan — ACTib corpus**
The ACTib corpus (>180 million words; [Meelen & Roux 2020](https://zenodo.org/records/3951503)) was used as the target-side basis for augmented examples. Lines were cleaned to remove non-Tibetan content (e.g. page numbers) and split into manuscript-length sequences using the `createTiblines.py` script, producing an 8-million-line pool from which training examples were drawn.
**3. Tibetan abbreviation dictionary**
A [custom-built abbreviation dictionary](https://huggingface.co/datasets/pagantibet/Tibetan-abbreviation-dictionary) of approximately 10,000 diplomatic abbreviation–expansion pairs, used in the dictionary-based augmentation strategy described below.
---
## Data Augmentation
To overcome the scarcity of gold parallel data, four augmentation methods were applied to generate synthetic source-side variants from standard target-side text. Each method models a different type of variation found in historical Tibetan manuscripts. Full details and the scripts used are available in the [Data_Augmentation](https://github.com/pagantibet/normalisation/tree/main/Data_Augmentation) directory of the repository.
### 1. Random Noise Injection
A custom noise injection script simulates naturally occurring scribal variation in diplomatic texts, following the probabilistic noise formula of [Huang et al. (2023)](https://www.isca-archive.org/sigul_2023/huang23_sigul.html). The noise model introduces character substitutions, diacritic variations, and orthographic inconsistencies at frequencies calibrated to realistic manuscript variation rates.
```bash
python3 Tibrandomnoiseaugmentation.py my_corpus.txt
```
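Conceptually, the noise model walks over each line and substitutes characters with plausible variants at a fixed probability. The sketch below illustrates the idea only; the confusion table and the per-character probability it uses are invented placeholders, not the calibrated values in `Tibrandomnoiseaugmentation.py`.

```python
import random

# Hypothetical confusion table mapping standard characters to scribal
# variants (here: two vowel-sign confusions, purely illustrative).
CONFUSIONS = {
    "ི": ["ེ"],
    "ུ": ["ོ"],
}

def inject_noise(line, p=0.05, rng=None):
    """Replace each confusable character with a variant with probability p."""
    rng = rng or random.Random(0)
    out = []
    for ch in line:
        if ch in CONFUSIONS and rng.random() < p:
            out.append(rng.choice(CONFUSIONS[ch]))
        else:
            out.append(ch)
    return "".join(out)
```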
### 2. OCR-Based Noise Simulation
To model errors introduced during optical character recognition of Tibetan manuscripts, the [nlpaug](https://github.com/makcedward/nlpaug) library was used to generate OCR-realistic noise patterns. This augmentation strategy targets the specific character confusions and distortions that arise when digitising historical Tibetan documents.
```bash
python3 nlpaugtib.py --input <input_file.txt> --type nonsegmented [--aug_prob FLOAT]
```
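The underlying idea, as in nlpaug-style OCR augmentation, is to substitute visually confusable glyphs and occasionally drop characters. A minimal pure-Python sketch, with an invented (not empirically derived) confusion map:

```python
import random

# Hypothetical OCR confusions between visually similar Tibetan letters.
OCR_SUBS = {"ང": "ད", "པ": "བ"}

def ocr_noise(line, sub_p=0.1, drop_p=0.02, seed=0):
    """Simulate OCR errors: glyph substitutions plus rare character drops."""
    rng = random.Random(seed)
    out = []
    for ch in line:
        r = rng.random()
        if ch in OCR_SUBS and r < sub_p:
            out.append(OCR_SUBS[ch])  # confusable-glyph substitution
        elif r < drop_p:
            continue                  # simulated dropped character
        else:
            out.append(ch)
    return "".join(out)
```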
### 3. Rule-Based Diplomatic Transformations
A targeted rule-based augmentation script applies character replacements reflecting common scribal conventions and variations found in historical Tibetan manuscripts. Transformations are applied stochastically at the character and syllable levels, with adjustable ratios to control the density of introduced variation.
```bash
python3 tibrule_augmentation.py input.txt --char-ratio 0.1 --syllable-ratio 0.05
```
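The two ratios control how densely rules fire at each level: syllables (delimited by the tsheg `་`) are transformed at one rate, individual characters at another. The rules below are invented placeholders, not the scribal conventions actually encoded in `tibrule_augmentation.py`.

```python
import random

# Placeholder character-level rule table (illustrative only).
CHAR_RULES = {"ི": "ེ"}

def drop_final_sa(syl):
    """Placeholder syllable-level rule: drop a final letter SA."""
    return syl[:-1] if syl.endswith("ས") else syl

def apply_rules(line, char_ratio=0.1, syllable_ratio=0.05, seed=0):
    """Apply syllable- and character-level rules stochastically."""
    rng = random.Random(seed)
    out = []
    for syl in line.split("་"):
        if rng.random() < syllable_ratio:
            syl = drop_final_sa(syl)
        syl = "".join(
            CHAR_RULES[c] if c in CHAR_RULES and rng.random() < char_ratio else c
            for c in syl
        )
        out.append(syl)
    return "་".join(out)
```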
### 4. Dictionary-Based Augmentation
Entries from the Tibetan abbreviation dictionary are injected into random lines, exposing the model to a wide range of abbreviation–expansion pairs during training. This augmentation is particularly important for teaching the model to resolve the abbreviated forms that are among the most frequent and systematic deviations from standard orthography in diplomatic Tibetan texts.
```bash
python3 dictionary-augmentation.py input.txt abbreviation-dictionary.txt
```
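The core operation is straightforward: on the source side, replace an expanded form with its abbreviated form, while the target side keeps the standard spelling. The sketch below shows that pairing step for a single line with a one-entry toy dictionary; the released script additionally selects which lines to inject into at random.

```python
# Toy dictionary: expanded form -> abbreviated (diplomatic) form.
# One illustrative entry; the real dictionary has ~10,000 pairs.
ABBREVIATIONS = {"བཀྲ་ཤིས": "བཀྲིས"}

def abbreviate_line(target_line):
    """Produce a (source, target) pair by abbreviating the target text."""
    source_line = target_line
    for expansion, abbrev in ABBREVIATIONS.items():
        source_line = source_line.replace(expansion, abbrev)
    return source_line, target_line
```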
---
## Data Preparation
Before augmentation, the raw text data was prepared in several ways:
- **Line creation**: The ACTib does not contain natural linebreaks and includes non-Tibetan material. The `createTiblines.py` script cleans the corpus and splits it into artificial lines of varying, manuscript-realistic lengths to create appropriate sequence units for training.
- **Tokenisation** (optional): Both tokenised and non-tokenised versions of the dataset were used in experiments. The non-tokenised version is provided here. To produce a tokenised version, source and target sides can be segmented using the `botokenise_src-tgt.py` script (see [Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation)). Note that results in Meelen & Griffiths (2026) show tokenisation is best applied *after* normalisation in a production pipeline.
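The line-creation step can be pictured as chunking a continuous syllable stream into lines of random length. The sketch below is a simplification: it splits at tsheg (`་`) boundaries into random syllable counts, whereas the actual `createTiblines.py` also removes non-Tibetan material and uses its own length distribution.

```python
import random

def split_into_lines(text, min_syl=5, max_syl=12, seed=0):
    """Split continuous Tibetan text into artificial manuscript-style lines."""
    rng = random.Random(seed)
    syllables = [s for s in text.split("་") if s]
    lines, i = [], 0
    while i < len(syllables):
        n = rng.randint(min_syl, max_syl)  # random line length in syllables
        lines.append("་".join(syllables[i:i + n]) + "་")
        i += n
    return lines
```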
---
## Intended Use
This dataset is intended for:
- **Training sequence-to-sequence models** for Classical Tibetan normalisation, particularly character-level encoder-decoder transformers.
- **Research on low-resource historical text normalisation**, including the study of data augmentation strategies for extremely low-resource language pairs.
- **Digital humanities** workflows aimed at producing normalised, standardised eTexts from historical Tibetan manuscript corpora.
The dataset is not suitable for evaluating normalisation performance, as the augmented source-side material is synthetically generated and does not represent a held-out sample of real diplomatic text. For evaluation data, see the gold test sets used in Meelen & Griffiths (2026), available in the [Evaluations](https://github.com/pagantibet/normalisation/tree/main/Evaluations) directory.
---
## Models Trained on This Dataset
| Model | Description |
|---|---|
| [`pagantibet/normalisationS2S-nontokenised`](https://huggingface.co/pagantibet/normalisationS2S-nontokenised) | Character-level Seq2Seq, non-tokenised input/output |
| [`pagantibet/normalisationS2S-tokenised`](https://huggingface.co/pagantibet/normalisationS2S-tokenised) | Character-level Seq2Seq, tokenised input/output |
---
## Related Resources
| Resource | Link |
|---|---|
| Abbreviation dictionary | [`pagantibet/Tibetan-abbreviation-dictionary`](https://huggingface.co/datasets/pagantibet/Tibetan-abbreviation-dictionary) |
| Non-tokenised KenLM ranker | [`pagantibet/5gram-kenLM_char`](https://huggingface.co/pagantibet/5gram-kenLM_char) |
| Tokenised KenLM ranker | [`pagantibet/5gram-kenLM_char-tok`](https://huggingface.co/pagantibet/5gram-kenLM_char-tok) |
| Data augmentation scripts | [github.com/pagantibet/normalisation/Data_Augmentation](https://github.com/pagantibet/normalisation/tree/main/Data_Augmentation) |
| Data preparation scripts | [github.com/pagantibet/normalisation/Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation) |
| Training scripts | [github.com/pagantibet/normalisation/Training](https://github.com/pagantibet/normalisation/tree/main/Training) |
| ACTib corpus | [Zenodo (Meelen & Roux 2020)](https://zenodo.org/records/3951503) |
| PaganTibet project | [pagantibet.com](https://www.pagantibet.com/) |
---
## Citation
If you use this dataset, please cite the accompanying paper and the code repository:
```bibtex
@inproceedings{meelen-griffiths-2026-tibetan-normalisation,
author = {Meelen, Marieke and Griffiths, R.M.},
title = {Historical Tibetan Normalisation: rule-based vs neural \& n-gram LM methods for extremely low-resource languages},
booktitle = {Proceedings of the AI4CHIEF conference},
publisher = {Springer},
year = {2026}
}
```
---
## License
This dataset is released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). It may be used freely for non-commercial research and educational purposes, with attribution and under the same licence terms.
---
## Funding
This work was partially funded by the European Union (ERC, Pagan Tibet, grant no. 101097364). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.