---
license: cc-by-nc-sa-4.0
language:
- bo
tags:
- classical-tibetan
- historical-text
- normalisation
- seq2seq
- parallel-corpus
- data-augmentation
- low-resource
- digital-humanities
size_categories:
- 1M<n<10M
task_categories:
- text2text-generation
---

# normalisation-S2S-training

A large-scale parallel training dataset for **Classical Tibetan text normalisation**, containing approximately 2 million line pairs mapping diplomatic (non-standard, abbreviated) Tibetan manuscript text to Standard Classical Tibetan. This dataset was used to train the sequence-to-sequence normalisation models released as part of the [PaganTibet](https://www.pagantibet.com/) project.

The dataset combines a manually curated gold-standard corpus with extensively augmented data generated using four complementary strategies designed to simulate the scribal variation, abbreviation, and orthographic inconsistency characteristic of historical Tibetan manuscripts.

This dataset is part of the [PaganTibet](https://www.pagantibet.com/) project and accompanies the paper:

> Meelen, M. & Griffiths, R.M. (2026) 'Historical Tibetan Normalisation: rule-based vs neural & n-gram LM methods for extremely low-resource languages' in *Proceedings of the AI4CHIEF conference*, Springer.

Please cite the paper and the [code repository](https://github.com/pagantibet/normalisation) when using this dataset.

---

## Dataset Description

Classical Tibetan manuscripts present significant challenges for automatic normalisation: texts are riddled with abbreviations, non-standard spellings, diacritic variation, and scribal idiosyncrasies, while parallel training data — pairs of diplomatic input alongside normalised output — is extremely scarce. This dataset addresses that scarcity through systematic data augmentation, expanding a small gold-standard collection into a training corpus of over 2 million examples.

Each row in the dataset is a single line of Tibetan text. The dataset is structured as a **source–target parallel corpus**: source lines contain diplomatic or non-standard Tibetan, and target lines contain the corresponding Standard Classical Tibetan normalisation. Because the augmentation pipeline generates source-side variation from known target-side text, source and target lines are paired and must be used together during training.

The dataset is provided in its **non-tokenised** form. A tokenised version was also used in experiments (see Meelen & Griffiths 2026) but is not separately released, as tokenisation can be applied at training time using the scripts provided in the [Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation) directory.

### Dataset Statistics

| Split | Rows |
|---|---|
| train | ~2,028,816 |

---

## Data Sources

The dataset draws on three underlying sources:

**1. Gold-standard parallel data (PaganTibet corpus)**
A collection of 7,421 manually normalised line pairs from the PaganTibet corpus, representing real diplomatic Tibetan manuscript text alongside its Standard Classical Tibetan normalisation. This is the only portion of the dataset containing genuine diplomatic source text; all other source-side material is synthetically generated from standard text.

**2. Standard Classical Tibetan — ACTib corpus**
The ACTib corpus (>180 million words; [Meelen & Roux 2020](https://zenodo.org/records/3951503)) was used as the target-side basis for augmented examples. Lines were cleaned to remove non-Tibetan content (e.g. page numbers) and split into manuscript-length sequences using the `createTiblines.py` script, producing an 8-million-line pool from which training examples were drawn.

**3. Tibetan abbreviation dictionary**
A [custom-built abbreviation dictionary](https://huggingface.co/datasets/pagantibet/Tibetan-abbreviation-dictionary) of approximately 10,000 diplomatic abbreviation–expansion pairs, used in the dictionary-based augmentation strategy described below.

---

## Data Augmentation

To overcome the scarcity of gold parallel data, four augmentation methods were applied to generate synthetic source-side variants from standard target-side text. Each method models a different type of variation found in historical Tibetan manuscripts. Full details and the scripts used are available in the [Data_Augmentation](https://github.com/pagantibet/normalisation/tree/main/Data_Augmentation) directory of the repository.

### 1. Random Noise Injection

A custom noise injection script simulates naturally occurring scribal variation in diplomatic texts, following the probabilistic noise formula of [Huang et al. (2023)](https://www.isca-archive.org/sigul_2023/huang23_sigul.html). The noise model introduces character substitutions, diacritic variations, and orthographic inconsistencies at frequencies calibrated to realistic manuscript variation rates.

```bash
python3 Tibrandomnoiseaugmentation.py my_corpus.txt
```
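
The idea behind this kind of probabilistic noise can be sketched in a few lines: each character is independently dropped or swapped for a variant with small probabilities. This is an illustrative sketch only, not the actual `Tibrandomnoiseaugmentation.py` logic; the confusion table and probabilities below are invented placeholders, and the real script uses Tibetan-specific substitution tables.

```python
import random

# Hypothetical confusion table: character -> diplomatic-style variant.
# These entries are placeholders, not a validated Tibetan variant set.
CONFUSIONS = {"ི": "ེ", "ཟ": "ས"}

def add_noise(line, p_sub=0.05, p_del=0.01, rng=None):
    """Apply per-character probabilistic noise to one line of text."""
    rng = rng or random.Random(0)
    out = []
    for ch in line:
        r = rng.random()
        if r < p_del:
            continue                       # simulate a dropped character
        if r < p_del + p_sub and ch in CONFUSIONS:
            out.append(CONFUSIONS[ch])     # simulate a scribal variant
        else:
            out.append(ch)
    return "".join(out)
```

Seeding the generator makes the augmentation reproducible across runs, which matters when regenerating the training corpus.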

### 2. OCR-Based Noise Simulation

To model errors introduced during optical character recognition of Tibetan manuscripts, the [nlpaug](https://github.com/makcedward/nlpaug) library was used to generate OCR-realistic noise patterns. This augmentation strategy targets the specific character confusions and distortions that arise when digitising historical Tibetan documents.

```bash
python3 nlpaugtib.py --input <input_file.txt> --type nonsegmented [--aug_prob FLOAT]
```
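
OCR-style augmenters of this kind work from a table of visually confusable characters, swapping each occurrence with some probability (roughly what the `--aug_prob` flag controls). A minimal pure-Python sketch of that mechanism, with invented Tibetan confusion pairs rather than a validated OCR confusion set:

```python
import random

# Placeholder confusion table: character -> visually similar alternatives.
OCR_CONFUSIONS = {"ད": ["ར"], "པ": ["བ"]}

def ocr_noise(line, aug_prob=0.1, seed=0):
    """Swap characters for OCR-confusable lookalikes with probability aug_prob."""
    rng = random.Random(seed)
    return "".join(
        rng.choice(OCR_CONFUSIONS[ch])
        if ch in OCR_CONFUSIONS and rng.random() < aug_prob
        else ch
        for ch in line
    )
```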

### 3. Rule-Based Diplomatic Transformations

A targeted rule-based augmentation script applies character replacements reflecting common scribal conventions and variations found in historical Tibetan manuscripts. Transformations are applied stochastically at the character and syllable levels, with adjustable ratios to control the density of introduced variation.

```bash
python3 tibrule_augmentation.py input.txt --char-ratio 0.1 --syllable-ratio 0.05
```
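
The two-level stochastic application suggested by the `--char-ratio` and `--syllable-ratio` flags can be sketched as follows: syllable rules fire per tsek-delimited syllable, character rules per character within a syllable. The rule tables here are invented placeholders, not the script's actual transformations.

```python
import random

# Placeholder rule tables, not the real scribal-convention rules.
CHAR_RULES = {"ཟ": "ས"}
SYLLABLE_RULES = {"བཞི": "བཞྀ"}

def apply_rules(line, char_ratio=0.1, syllable_ratio=0.05, seed=0):
    """Apply syllable-level then character-level rules stochastically."""
    rng = random.Random(seed)
    out = []
    for syl in line.split("་"):          # split on the tsek separator
        if syl in SYLLABLE_RULES and rng.random() < syllable_ratio:
            out.append(SYLLABLE_RULES[syl])
            continue
        out.append("".join(
            CHAR_RULES[c] if c in CHAR_RULES and rng.random() < char_ratio else c
            for c in syl
        ))
    return "་".join(out)
```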

### 4. Dictionary-Based Augmentation

Entries from the Tibetan abbreviation dictionary are injected into random lines, exposing the model to a wide range of abbreviation–expansion pairs during training. This augmentation is particularly important for teaching the model to resolve the abbreviated forms that are among the most frequent and systematic deviations from standard orthography in diplomatic Tibetan texts.

```bash
python3 dictionary-augmentation.py input.txt abbreviation-dictionary.txt
```
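
One way to read this strategy: wherever a standard line contains an expansion from the dictionary, the source copy can carry the abbreviated form while the target keeps the full expansion, yielding a training pair. This is a sketch of that interpretation, not the released `dictionary-augmentation.py`; the dictionary entry and the substitution probability are invented placeholders.

```python
import random

# Placeholder entry: expansion -> abbreviation (hypothetical pair).
ABBREVIATIONS = {"བཀྲ་ཤིས": "བཀྲིས"}

def make_pair(standard_line, p_abbrev=1.0, rng=None):
    """Return a (source, target) pair where the source may use abbreviated forms."""
    rng = rng or random.Random(0)
    source = standard_line
    for expansion, abbrev in ABBREVIATIONS.items():
        if expansion in source and rng.random() < p_abbrev:
            source = source.replace(expansion, abbrev)
    return source, standard_line  # abbreviated source, standard target
```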

---

## Data Preparation

Before augmentation, the raw text data was prepared in several ways:

- **Line creation**: The ACTib corpus does not contain natural line breaks and includes non-Tibetan material. The `createTiblines.py` script cleans the corpus and splits it into artificial lines of varying, manuscript-realistic lengths to create appropriate sequence units for training.
- **Tokenisation** (optional): Both tokenised and non-tokenised versions of the dataset were used in experiments. The non-tokenised version is provided here. To produce a tokenised version, source and target sides can be segmented using the `botokenise_src-tgt.py` script (see [Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation)). Note that results in Meelen & Griffiths (2026) show tokenisation is best applied *after* normalisation in a production pipeline.
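
The line-creation step can be sketched as cutting a continuous syllable stream into lines of randomly varying length. This is an illustration of the idea only, not the logic of `createTiblines.py`; the length range is an assumption, not the script's actual setting.

```python
import random

def make_lines(syllables, min_len=8, max_len=20, seed=0):
    """Cut a syllable stream into manuscript-length lines joined by tseks."""
    rng = random.Random(seed)
    lines, i = [], 0
    while i < len(syllables):
        n = rng.randint(min_len, max_len)          # random line length
        lines.append("་".join(syllables[i:i + n]))  # rejoin with the tsek
        i += n
    return lines
```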

---

## Intended Use

This dataset is intended for:

- **Training sequence-to-sequence models** for Classical Tibetan normalisation, particularly character-level encoder-decoder transformers.
- **Research on low-resource historical text normalisation**, including the study of data augmentation strategies for extremely low-resource language pairs.
- **Digital humanities** workflows aimed at producing normalised, standardised eTexts from historical Tibetan manuscript corpora.

The dataset is not suitable for evaluating normalisation performance, as the augmented source-side material is synthetically generated and does not represent a held-out sample of real diplomatic text. For evaluation data, see the gold test sets used in Meelen & Griffiths (2026), available in the [Evaluations](https://github.com/pagantibet/normalisation/tree/main/Evaluations) directory.

---

## Models Trained on This Dataset

| Model | Description |
|---|---|
| [`pagantibet/normalisationS2S-nontokenised`](https://huggingface.co/pagantibet/normalisationS2S-nontokenised) | Character-level Seq2Seq, non-tokenised input/output |
| [`pagantibet/normalisationS2S-tokenised`](https://huggingface.co/pagantibet/normalisationS2S-tokenised) | Character-level Seq2Seq, tokenised input/output |

---

## Related Resources

| Resource | Link |
|---|---|
| Abbreviation dictionary | [`pagantibet/Tibetan-abbreviation-dictionary`](https://huggingface.co/datasets/pagantibet/Tibetan-abbreviation-dictionary) |
| Non-tokenised KenLM ranker | [`pagantibet/5gram-kenLM_char`](https://huggingface.co/pagantibet/5gram-kenLM_char) |
| Tokenised KenLM ranker | [`pagantibet/5gram-kenLM_char-tok`](https://huggingface.co/pagantibet/5gram-kenLM_char-tok) |
| Data augmentation scripts | [github.com/pagantibet/normalisation/Data_Augmentation](https://github.com/pagantibet/normalisation/tree/main/Data_Augmentation) |
| Data preparation scripts | [github.com/pagantibet/normalisation/Data_Preparation](https://github.com/pagantibet/normalisation/tree/main/Data_Preparation) |
| Training scripts | [github.com/pagantibet/normalisation/Training](https://github.com/pagantibet/normalisation/tree/main/Training) |
| ACTib corpus | [Zenodo (Meelen & Roux 2020)](https://zenodo.org/records/3951503) |
| PaganTibet project | [pagantibet.com](https://www.pagantibet.com/) |

---

## Citation

If you use this dataset, please cite the accompanying paper and the code repository:

```bibtex
@inproceedings{meelen-griffiths-2026-tibetan-normalisation,
  author    = {Meelen, Marieke and Griffiths, R.M.},
  title     = {Historical Tibetan Normalisation: rule-based vs neural \& n-gram LM methods for extremely low-resource languages},
  booktitle = {Proceedings of the AI4CHIEF conference},
  publisher = {Springer},
  year      = {2026}
}
```

---

## License

This dataset is released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). It may be used freely for non-commercial research and educational purposes, with attribution and under the same licence terms.

---

## Funding

This work was partially funded by the European Union (ERC, Pagan Tibet, grant no. 101097364). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.