pagantibet committed on
Commit b8cd3dc · verified · 1 parent: a61c8ce

Update README.md

Files changed (1): README.md (+14 −30)
README.md CHANGED
@@ -15,22 +15,22 @@ tags:
  size_categories:
  - 1K<n<10K
  task_categories:
- - text2text-generation
  ---

- # Tibetan-normalisation-testdata

- A collection of **evaluation datasets** for Classical Tibetan text normalisation, containing three distinct test sets designed to assess normalisation systems under different conditions: a manually curated gold-standard set of real diplomatic manuscript text, and two synthetic sets of Standard Classical Tibetan text with OCR-based noise applied. Together these test sets allow evaluation across a spectrum from clean, realistic manuscript normalisation to more controlled, large-scale noise correction scenarios.

- All test sets are provided in both **non-tokenised** and **tokenised** forms (where available), to support evaluation of both the non-tokenised and tokenised model variants. Each test set consists of paired source and target files: source files contain the noisy or diplomatic input, target files contain the corresponding Standard Classical Tibetan reference.

  This dataset is part of the [PaganTibet](https://www.pagantibet.com/) project and accompanies the paper:

- > Meelen, M. & Griffiths, R.M. (2026) 'Historical Tibetan Normalisation: rule-based vs neural & n-gram LM methods for extremely low-resource languages' in *Proceedings of the AI4CHIEF conference*, Springer.

- Please cite the paper and the [code repository](https://github.com/pagantibet/normalisation) when using this dataset.

- > **These datasets must not be used for training.** All training material is in [`pagantibet/normalisation-S2S-training`](https://huggingface.co/datasets/pagantibet/normalisation-S2S-training).

  ---
36
 
@@ -38,7 +38,7 @@ Please cite the paper and the [code repository](https://github.com/pagantibet/no

  ### 1. GoldTest — Gold-Standard Diplomatic Tibetan

- The primary evaluation set, consisting of real diplomatic Classical Tibetan manuscript text alongside manually produced Standard Classical Tibetan normalisations. This is the most challenging and most meaningful test set: source lines contain genuine scribal variation, abbreviations, non-standard orthography, and diacritic inconsistencies drawn from the PaganTibet corpus, not synthetically generated noise.

  This set is held out from the training data and does not overlap with the gold-standard lines used in [`pagantibet/normalisation-S2S-training`](https://huggingface.co/datasets/pagantibet/normalisation-S2S-training).

@@ -49,7 +49,7 @@ This set is held out from the training data and does not overlap with the gold-s
  | `GoldTest_source-tok.txt` | Diplomatic source text (tokenised) |
  | `GoldTest_target-tok.txt` | Standard Classical Tibetan reference (tokenised) |

- Full evaluation results with bootstrapped confidence intervals for this test set are available in the repository:
  - [Evaluations/Gold-nontokenised-CI](https://github.com/pagantibet/normalisation/tree/main/Evaluations/Gold-nontokenised-CI)
  - [Evaluations/Gold-tokenised-CI](https://github.com/pagantibet/normalisation/tree/main/Evaluations/Gold-tokenised-CI)

@@ -59,7 +59,7 @@ Full evaluation results with bootstrapped confidence intervals for this test set

  A synthetic test set derived from the Standard Classical Tibetan ACTib corpus ([Meelen & Roux 2020](https://zenodo.org/records/3951503)), with OCR-realistic noise applied to the source side using the [nlpaug](https://github.com/makcedward/nlpaug) library. Source lines contain OCR-style character errors and distortions; target lines are the clean, original ACTib text. This set evaluates a model's ability to correct the specific character confusions and distortions that arise when digitising historical Tibetan documents.

- Because both source and target sides are derived from the ACTib (standard text with synthetic noise), this test set is more controlled than the GoldTest and allows for cleaner measurement of OCR correction capacity in isolation from other normalisation challenges.

  | File | Description |
  |---|---|
@@ -81,12 +81,12 @@ A larger-scale version of the ACTib OCR noise test set, containing 5,000 lines.

  ---

- ## Dataset Statistics

  | Test Set | Lines | Source Type | Tokenised? |
  |---|---|---|---|
- | GoldTest | ~700 | Real diplomatic manuscript text | ✓ both |
- | ACTibOCRnoiseTest | ~700 | Synthetic OCR noise on ACTib | ✓ both |
  | 5000ACTibOCRnoiseTest | 5,000 | Synthetic OCR noise on ACTib | ✗ non-tok only |

  *Note: the Hugging Face Dataset Viewer displays the dataset as a single `train` split — this is a technical default. All files are evaluation data and must not be used for training.*
@@ -233,22 +233,6 @@ Full evaluation results for all inference modes across both tokenised and non-to

  ---

- ## Citation
-
- If you use this dataset, please cite the accompanying paper and the code repository:
-
- ```bibtex
- @inproceedings{meelen-griffiths-2026-tibetan-normalisation,
-   author = {Meelen, Marieke and Griffiths, R.M.},
-   title = {Historical Tibetan Normalisation: rule-based vs neural \& n-gram LM methods for extremely low-resource languages},
-   booktitle = {Proceedings of the AI4CHIEF conference},
-   publisher = {Springer},
-   year = {2026}
- }
- ```
-
- ---
-
  ## License

  This dataset is released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). It may be used freely for non-commercial research and educational purposes, with attribution and under the same licence terms.
@@ -257,4 +241,4 @@ This dataset is released under [CC BY-NC-SA 4.0](https://creativecommons.org/lic

  ## Funding

- This work was partially funded by the European Union (ERC, Pagan Tibet, grant no. 101097364). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.
 
  size_categories:
  - 1K<n<10K
  task_categories:
+ - text-generation
  ---

+ # Tibetan Normalisation - Test Data

+ A collection of evaluation datasets for Classical Tibetan text normalisation, containing three distinct test sets designed to assess normalisation systems under different conditions: a manually curated gold-standard set of diplomatic manuscript text, and two synthetic sets of Standard Classical Tibetan text with OCR-based noise applied. Together these test sets allow evaluation across a spectrum from clean, realistic manuscript normalisation to more controlled, large-scale noise correction scenarios.

+ All test sets are provided in both non-tokenised and tokenised forms (where available), to support evaluation of both the non-tokenised and tokenised model variants. Each test set consists of paired source and target files: source files contain the noisy or diplomatic input, target files contain the corresponding Standard Classical Tibetan reference.

  This dataset is part of the [PaganTibet](https://www.pagantibet.com/) project and accompanies the paper:

+ Meelen, M. & Griffiths, R.M. (2026) 'Historical Tibetan Normalisation: rule-based vs neural & n-gram LM methods for extremely low-resource languages' in *Proceedings of the AI4CHIEF conference*, Springer.

+ Please cite the paper and the [code repository on GitHub](https://github.com/pagantibet/normalisation) when using this dataset.

+ **These datasets must not be used for training.** All training material can be found in [`pagantibet/normalisation-S2S-training`](https://huggingface.co/datasets/pagantibet/normalisation-S2S-training).

  ---

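The paired source/target layout described above can be read with a short, stdlib-only Python sketch. It assumes the pairing convention stated in the card (line *i* of a source file corresponds to line *i* of the target file); the function name is illustrative, not part of the repository.

```python
from pathlib import Path


def load_pairs(source_path, target_path):
    """Read a paired test set: line i of the source file is the noisy or
    diplomatic input, line i of the target file its Standard Classical
    Tibetan reference."""
    src = Path(source_path).read_text(encoding="utf-8").splitlines()
    tgt = Path(target_path).read_text(encoding="utf-8").splitlines()
    if len(src) != len(tgt):
        # A length mismatch means the files are not line-aligned.
        raise ValueError(f"line count mismatch: {len(src)} vs {len(tgt)}")
    return list(zip(src, tgt))
```

The length check matters in practice: a dropped or merged line silently misaligns every subsequent pair.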

  ### 1. GoldTest — Gold-Standard Diplomatic Tibetan

+ The primary evaluation set, consisting of diplomatic Classical Tibetan manuscript text alongside manually produced Standard Classical Tibetan normalisations. This is the most challenging and most meaningful test set: source lines contain genuine scribal variation, abbreviations, non-standard orthography, and diacritic inconsistencies drawn from the PaganTibet corpus, not synthetically generated noise.

  This set is held out from the training data and does not overlap with the gold-standard lines used in [`pagantibet/normalisation-S2S-training`](https://huggingface.co/datasets/pagantibet/normalisation-S2S-training).

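The card does not fix an evaluation metric at this point; character error rate (CER) is one standard choice for scoring normalisation output against a reference, sketched below purely as an illustration (the repository's own evaluation scripts may compute different metrics).

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]


def cer(hypothesis: str, reference: str) -> float:
    """Character error rate: edits needed to turn the hypothesis into the
    reference, normalised by reference length."""
    return edit_distance(hypothesis, reference) / max(len(reference), 1)
```

Working on Unicode code points, as here, is the simplest option; a syllable- or token-level variant would weight Tibetan tsheg-delimited units instead.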
  | `GoldTest_source-tok.txt` | Diplomatic source text (tokenised) |
  | `GoldTest_target-tok.txt` | Standard Classical Tibetan reference (tokenised) |

+ Full evaluation results with bootstrapped confidence intervals for this test set are available in the [PaganTibet GitHub repository](https://github.com/pagantibet/normalisation/tree/main):
  - [Evaluations/Gold-nontokenised-CI](https://github.com/pagantibet/normalisation/tree/main/Evaluations/Gold-nontokenised-CI)
  - [Evaluations/Gold-tokenised-CI](https://github.com/pagantibet/normalisation/tree/main/Evaluations/Gold-tokenised-CI)

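Bootstrapped confidence intervals of the kind linked above are computed, in outline, by resampling per-line scores with replacement. The stdlib sketch below shows a percentile bootstrap for the mean score; the repository's actual procedure (resample count, interval level, metric) may differ.

```python
import random
import statistics


def bootstrap_ci(scores, n_resamples=1000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the mean of
    per-line scores."""
    rng = random.Random(seed)
    n = len(scores)
    # Mean of each resample (drawn with replacement), sorted for percentiles.
    means = sorted(
        statistics.fmean(rng.choices(scores, k=n)) for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * (1 - level) / 2)]
    hi = means[min(int(n_resamples * (1 + level) / 2), n_resamples - 1)]
    return lo, hi
```

Fixing the seed makes reported intervals reproducible, which matters when comparing systems on the same test set.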

  A synthetic test set derived from the Standard Classical Tibetan ACTib corpus ([Meelen & Roux 2020](https://zenodo.org/records/3951503)), with OCR-realistic noise applied to the source side using the [nlpaug](https://github.com/makcedward/nlpaug) library. Source lines contain OCR-style character errors and distortions; target lines are the clean, original ACTib text. This set evaluates a model's ability to correct the specific character confusions and distortions that arise when digitising historical Tibetan documents.

+ Because both source and target files are derived from the ACTib corpus, this test set is more controlled than the GoldTest above and allows for cleaner measurement of OCR correction capacity in isolation from other normalisation challenges.

  | File | Description |
  |---|---|

  ---

+ ## Dataset Overview

  | Test Set | Lines | Source Type | Tokenised? |
  |---|---|---|---|
+ | GoldTest | 217 | Diplomatic manuscript text | ✓ both |
+ | ACTibOCRnoiseTest | 216 | Synthetic OCR noise on ACTib | ✓ both |
  | 5000ACTibOCRnoiseTest | 5,000 | Synthetic OCR noise on ACTib | ✗ non-tok only |

  *Note: the Hugging Face Dataset Viewer displays the dataset as a single `train` split — this is a technical default. All files are evaluation data and must not be used for training.*

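The OCR-noise test sets above were produced with nlpaug; as a purely illustrative stand-in for that process, the following stdlib sketch injects character-level confusions with a fixed probability. The confusion table is hypothetical and does not reproduce the Tibetan-script mappings actually used.

```python
import random

# Hypothetical confusion table, for illustration only; the real
# character-confusion model applied via nlpaug is not reproduced here.
CONFUSIONS = {"a": ["o", "e"], "l": ["1", "i"], "0": ["o", "O"]}


def add_ocr_noise(line: str, p: float = 0.1, seed: int = 0) -> str:
    """Independently replace each confusable character with probability p,
    simulating OCR-style misrecognitions."""
    rng = random.Random(seed)
    return "".join(
        rng.choice(CONFUSIONS[ch]) if ch in CONFUSIONS and rng.random() < p else ch
        for ch in line
    )
```

Because each substitution is one-to-one, noisy and clean lines stay the same length and remain trivially alignable, which is what makes synthetic sets like these so controlled.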

  ---

  ## License

  This dataset is released under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). It may be used freely for non-commercial research and educational purposes, with attribution and under the same licence terms.

  ## Funding

+ This work was partially funded by the European Union (ERC, Pagan Tibet, grant no. 101097364). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.