---
license: cc-by-nc-sa-4.0
language:
- bo
tags:
- classical-tibetan
- historical-text
- normalisation
- evaluation
- gold-standard
- parallel-corpus
- ocr
- low-resource
- digital-humanities
size_categories:
- 1K<n<10K
task_categories:
- text-generation
---
# Tibetan Normalisation - Test Data
A collection of evaluation datasets for Classical Tibetan text normalisation, containing three distinct test sets designed to assess normalisation systems under different conditions: a manually curated gold-standard set of diplomatic manuscript text, and two synthetic sets of Standard Classical Tibetan text with OCR-based noise applied. Together, these test sets support evaluation across a spectrum from realistic manuscript normalisation to controlled, large-scale noise-correction scenarios.
All test sets are provided in both non-tokenised and tokenised forms (where available, via a customised version of the Botok Tibetan tokeniser), to support evaluation of both the non-tokenised and tokenised model variants. Each test set consists of paired source and target files: source files contain the noisy or diplomatic input, target files contain the corresponding Standard Classical Tibetan reference.
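Since every test set ships as line-aligned source/target files, a minimal loader can zip the two files and sanity-check the alignment. The sketch below is illustrative (the function name and error message are my own, not part of the repository):

```python
from pathlib import Path

def load_pairs(source_path, target_path):
    """Load a parallel test set as (source, target) line pairs."""
    src = Path(source_path).read_text(encoding="utf-8").splitlines()
    tgt = Path(target_path).read_text(encoding="utf-8").splitlines()
    # Source and target files are line-aligned; a mismatch in line
    # counts indicates a truncated or corrupted download.
    if len(src) != len(tgt):
        raise ValueError(f"misaligned files: {len(src)} vs {len(tgt)} lines")
    return list(zip(src, tgt))
```

For example, `load_pairs("GoldTest_source.txt", "GoldTest_target.txt")` should yield 217 pairs.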
This dataset is part of the PaganTibet project and accompanies the paper:
Meelen, M. & Griffiths, R.M. (2026) 'Historical Tibetan Normalisation: rule-based vs neural & n-gram LM methods for extremely low-resource languages' in Proceedings of the AI4CHIEF conference, Springer.
Please cite the paper and the code repository on GitHub when using this dataset.
These datasets must not be used for training. All training material can be found in pagantibet/normalisation-S2S-training.
## Dataset Overview
| Test Set | Lines | Source Type | Tokenised? |
|---|---|---|---|
| GoldTest | 217 | Diplomatic manuscript text | ✓ both |
| ACTibOCRnoiseTest | 216 | Synthetic OCR noise on ACTib | ✓ both |
| 5000ACTibOCRnoiseTest | 5,000 | Synthetic OCR noise on ACTib | ✗ non-tok only |
Note: the Hugging Face Dataset Viewer displays the dataset as a single train split — this is a technical default. All files are evaluation data and must not be used for training.
## Test Sets
### 1. GoldTest — Gold-Standard Diplomatic Tibetan
The primary evaluation set, consisting of diplomatic Classical Tibetan manuscript text alongside manually produced Standard Classical Tibetan normalisations. This is the most challenging and most meaningful test set: source lines contain genuine scribal variation, abbreviations, non-standard orthography, and diacritic inconsistencies drawn from the PaganTibet corpus, not synthetically generated noise.
This set is held out from the training data and does not overlap with the gold-standard lines used in pagantibet/normalisation-S2S-training.
| File | Description |
|---|---|
| `GoldTest_source.txt` | Diplomatic source text (non-tokenised) |
| `GoldTest_target.txt` | Standard Classical Tibetan reference (non-tokenised) |
| `GoldTest_source-tok.txt` | Diplomatic source text (tokenised) |
| `GoldTest_target-tok.txt` | Standard Classical Tibetan reference (tokenised) |
Full evaluation results with bootstrapped confidence intervals for this test set are available in the PaganTibet GitHub repository (see Related Models and Resources below).
### 2. ACTibOCRnoiseTest — ACTib with OCR Noise
A synthetic test set derived from the Standard Classical Tibetan ACTib corpus (Meelen & Roux 2020), with OCR-realistic noise applied to the source side using the nlpaug library. Source lines contain OCR-style character errors and distortions; target lines are the clean, original ACTib text. This set evaluates a model's ability to correct the specific character confusions and distortions that arise when digitising historical Tibetan documents.
Because both source and target files are derived from the ACTib corpus, this test set is more controlled than the GoldTest above and allows for cleaner measurement of OCR correction capacity in isolation from other normalisation challenges.
| File | Description |
|---|---|
| `ACTibOCRnoiseTest_source.txt` | OCR-noised ACTib text (non-tokenised) |
| `ACTibOCRnoiseTest_target.txt` | Clean ACTib reference (non-tokenised) |
| `ACTibOCRnoiseTest_source-tok.txt` | OCR-noised ACTib text (tokenised) |
| `ACTibOCRnoiseTest_target-tok.txt` | Clean ACTib reference (tokenised) |
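The actual noise in this test set was generated with nlpaug's OCR augmenter. Purely as an illustration of the idea, a toy character-confusion sampler in the same spirit might look like the following; the confusion map here is hypothetical and is not the one used for Tibetan:

```python
import random

# Hypothetical confusion map for illustration only; the real OCR-style
# confusions for Tibetan script were produced with the nlpaug library.
CONFUSIONS = {"ka": ["kha"], "pa": ["pha", "ba"]}

def add_ocr_noise(tokens, p=0.1, seed=0):
    """Replace each confusable token with a look-alike with probability p."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok in CONFUSIONS and rng.random() < p:
            out.append(rng.choice(CONFUSIONS[tok]))
        else:
            out.append(tok)
    return out
```

Seeding the generator makes the noisy source reproducible, which matters for a fixed evaluation set: every system is scored against the same corruptions.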
### 3. 5000ACTibOCRnoiseTest — Large ACTib OCR Noise Test
A larger-scale version of the ACTib OCR noise test set, containing 5,000 lines. This set applies the same OCR-based noise simulation as the ACTibOCRnoiseTest above but at a larger scale, providing more statistical power for evaluation and confidence interval estimation without the need for bootstrapping. Only the non-tokenised version is provided.
| File | Description |
|---|---|
| `5000ACTibOCRnoiseTest_source.txt` | OCR-noised ACTib text, 5,000 lines (non-tokenised) |
| `5000ACTibOCRnoiseTest_target.txt` | Clean ACTib reference, 5,000 lines (non-tokenised) |
## Intended Use
This dataset is intended for:
- Evaluating Classical Tibetan normalisation systems across different test conditions: diplomatic text (GoldTest), controlled OCR noise correction (ACTibOCRnoiseTest), and large-scale OCR correction (5000ACTibOCRnoiseTest).
- Benchmarking new normalisation approaches against the baselines reported in Meelen & Griffiths (2026), across all six inference modes supported by the PaganTibet pipeline. For more on the inference modes, see the Inference ReadMe.
- Research on low-resource historical text normalisation, OCR post-correction, and related sequence-to-sequence tasks in Classical Tibetan.
## How to Evaluate with This Dataset
Evaluation is performed using the evaluate_model.py (or evaluate-model-withCIs.py) script from the PaganTibet normalisation repository. The script supports evaluating a trained neural model directly, or evaluating pre-generated prediction files from any inference method. See the Evaluation ReadMe for full documentation.
### Step 1 — Generate Predictions
Run your chosen inference mode on the test source file. For example, using the recommended `neural+lm+rules` mode on the GoldTest:
```bash
python3 tibetan-inference-flexible.py \
    --mode neural+lm+rules \
    --model_path tibetan_model_nontokenized_allchars.pt \
    --kenlm_path model_5gram_char.arpa \
    --lm_backend python \
    --rules_dict abbreviations.txt \
    --input_file GoldTest_source.txt \
    --output_file GoldTest_predictions.txt
```
The non-tokenised S2S model and non-tokenised KenLM ranker are both available on Hugging Face (see Related Models and Resources below).
Replace `GoldTest_source.txt` with the appropriate source file for whichever test set you are evaluating. See the Inference ReadMe for all available inference modes and options.
### Step 2 — Evaluate Predictions
```bash
python3 evaluate_model.py \
    --mode predictions \
    --predictions GoldTest_predictions.txt \
    --test_src GoldTest_source.txt \
    --test_tgt GoldTest_target.txt \
    --inference_method "neural+lm+rules"
```
Results are saved automatically to the evaluation-results/ directory as both a JSON file and a human-readable text report.
### Step 3 — Evaluate with Confidence Intervals (recommended)
For statistically robust results, use the CI variant (evaluate-model-withCIs.py) with bootstrapped confidence intervals (1,000 iterations by default). This is particularly recommended for the smaller GoldTest and ACTibOCRnoiseTest sets; for the 5,000-line set, point estimates are already more reliable.
```bash
python3 evaluate-model-withCIs.py \
    --mode predictions \
    --predictions GoldTest_predictions.txt \
    --test_src GoldTest_source.txt \
    --test_tgt GoldTest_target.txt \
    --inference_method "neural+lm+rules" \
    --bootstrap_n 1000
```
Output metrics will include 95% confidence intervals, e.g.:
```
Character Error Rate (CER): 4.21% (95% CI: 3.87–4.55%)
F1 Score: 93.10% (95% CI: 92.64–93.56%)
```
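The percentile bootstrap behind such intervals can be sketched generically over per-line scores; the repository script may differ in details such as resampling unit or interval method:

```python
import random

def bootstrap_ci(values, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of
    per-line scores (e.g. per-line CER values)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        # Resample lines with replacement and record the mean score.
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With 1,000 resamples and `alpha=0.05`, the interval spans the 2.5th to 97.5th percentile of the resampled means, matching the 95% CIs reported above.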
### Evaluating All Test Sets
To systematically evaluate across all test sets and inference modes, name prediction files clearly so that outputs are automatically organised:
```bash
# Example: evaluate all three source files with the same model
for testset in GoldTest ACTibOCRnoiseTest 5000ACTibOCRnoiseTest; do
    python3 tibetan-inference-flexible.py \
        --mode neural+lm+rules \
        --model_path tibetan_model_nontokenized_allchars.pt \
        --kenlm_path model_5gram_char.arpa \
        --lm_backend python \
        --rules_dict abbreviations.txt \
        --input_file ${testset}_source.txt \
        --output_file predictions_${testset}_neural+lm+rules.txt

    python3 evaluate_model.py \
        --mode predictions \
        --predictions predictions_${testset}_neural+lm+rules.txt \
        --test_src ${testset}_source.txt \
        --test_tgt ${testset}_target.txt \
        --inference_method "neural+lm+rules"
done
```
### Running on an HPC Cluster
A SLURM batch script is provided for running evaluation on an HPC cluster:
```bash
sbatch evaluate_model.sh
```
Predictions mode does not require a GPU. See the Evaluation ReadMe for SLURM configuration details.
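The provided `evaluate_model.sh` handles submission; as a rough sketch of what such a script contains, a minimal equivalent might look like this (all `#SBATCH` values here are hypothetical placeholders, not the script's actual settings):

```shell
#!/bin/bash
#SBATCH --job-name=tib-eval     # hypothetical values; adjust per cluster
#SBATCH --time=01:00:00
#SBATCH --mem=8G
#SBATCH --cpus-per-task=4       # predictions mode needs no GPU

python3 evaluate_model.py \
    --mode predictions \
    --predictions GoldTest_predictions.txt \
    --test_src GoldTest_source.txt \
    --test_tgt GoldTest_target.txt \
    --inference_method "neural+lm+rules"
```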
## Evaluation Metrics
The evaluation script computes the following metrics:
| Metric | Description |
|---|---|
| CER | Character Error Rate: edit distance normalised by reference length |
| Precision | Correct characters / total predicted characters |
| Recall | Correct characters / total reference characters |
| F1 | Harmonic mean of precision and recall |
| Correction Precision (CP) | Correctly corrected errors / total identified errors |
| Correction Recall (CR) | Correctly corrected errors / total errors in source |
CP and CR (following Huang et al. 2023) are particularly informative for normalisation tasks, as they specifically measure how effectively a system corrects non-standard forms rather than simply reproducing the input unchanged.
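As a reference point for the table above, CER reduces to a Levenshtein computation; a minimal sketch follows (the repository's `evaluate_model.py` may differ in details such as whitespace and tokenisation handling):

```python
def edit_distance(a, b):
    """Levenshtein distance between two character sequences,
    using a rolling single-row dynamic programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(prediction, reference):
    """Character Error Rate: edit distance normalised by reference length."""
    return edit_distance(prediction, reference) / max(len(reference), 1)
```

A perfect prediction gives a CER of 0; CER can exceed 1 when the prediction contains many insertions relative to a short reference.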
Full evaluation results for all inference modes across both tokenised and non-tokenised conditions, with confidence intervals, are available in the Evaluations directory of the repository.
## Related Models and Resources
| Resource | Link |
|---|---|
| Training dataset | pagantibet/normalisation-S2S-training |
| Non-tokenised Seq2Seq model | pagantibet/normalisationS2S-nontokenised |
| Tokenised Seq2Seq model | pagantibet/normalisationS2S-tokenised |
| Non-tokenised KenLM ranker | pagantibet/5gram-kenLM_char |
| Tokenised KenLM ranker | pagantibet/5gram-kenLM_char-tok |
| Abbreviation dictionary | pagantibet/Tibetan-abbreviation-dictionary |
| Evaluation scripts | github.com/pagantibet/normalisation/Evaluations |
| Inference scripts | github.com/pagantibet/normalisation/Inference |
| Full evaluation results (non-tokenised) | Evaluations/Gold-nontokenised-CI |
| Full evaluation results (tokenised) | Evaluations/Gold-tokenised-CI |
| ACTib corpus | Zenodo (Meelen & Roux 2020) |
| PaganTibet project | pagantibet.com |
## License
This dataset is released under CC BY-NC-SA 4.0. It may be used freely for non-commercial research and educational purposes, with attribution and under the same licence terms.
## Funding
This work was partially funded by the European Union (ERC, Pagan Tibet, grant no. 101097364). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.