---

language: la
library_name: transformers
license: cc-by-sa-4.0
base_model: google/byt5-large
pipeline_tag: text2text-generation
tags:
- latin
- medieval-latin
- normalization
- legal-history
- digital-humanities
- ocr-postprocessing
widget:
- text: "viiii vt in sabbato sancto ieiunium ante noctis initium non soluatur"
  example_title: "Medieval Legal Latin"
---


# Medieval Latin Normalizer (ByT5-Large)

This model is a **ByT5-Large** transformer fine-tuned to normalize medieval Latin text. It transforms diplomatic transcriptions or noisy HTR/OCR output into a standardized orthography, facilitating downstream processing such as POS tagging, lemmatization, and linguistic analysis. The model was developed as part of the research projects **"Embedding the Past"** (LOEWE-Exploration, TU Darmstadt) and **"Burchards Dekret Digital"** (Academy of Sciences and Literature | Mainz).

## Model Logic
Medieval Latin normalization involves handling inconsistent orthography (e.g., `u/v`, `i/j`, or `ae/e` variations) and resolving phonetic spellings common in legal and ecclesiastical manuscripts. 
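
For illustration, conventional normalizations of this kind include `vt` → `ut`, `soluatur` → `solvatur`, and `ę` → `ae` (these pairs show the expected direction of the mapping, not verified model outputs).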

By using **ByT5-Large**, the model operates directly on **UTF-8 bytes**. This is a significant advantage for Medieval Latin, as it allows the model to process non-standard characters without the information loss typical of subword tokenizers (like BERT or standard T5); the sketch after the list below illustrates this byte-level interface.

- **Input:** Raw/Diplomatic medieval Latin text.
- **Output:** Standardized/Normalized Latin text.
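
A minimal sketch of the byte-level interface (the word `sęcularis`, with e caudata, is an illustrative example, not taken from the training data):

```python
from transformers import AutoTokenizer

# ByT5 maps each UTF-8 byte to an ID (byte value + 3, reserving IDs for
# pad/eos/unk), so rare medieval characters never collapse into <unk>.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-large")

text = "sęcularis"  # "ę" encodes as two UTF-8 bytes
ids = tokenizer(text).input_ids
print(ids)                        # one ID per byte, plus a trailing </s>
print(len(text.encode("utf-8")))  # 10 bytes -> 10 content IDs
```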

## Technical Specifications
- **Architecture:** [ByT5-Large](https://huggingface.co/google/byt5-large) (~1.2B parameters).
- **Hardware:** Trained on NVIDIA Blackwell GPUs using `bf16` precision and `adamw_torch_fused` optimization.
- **Training Parameters:**
  - **Learning Rate:** 2e-4
  - **Epochs:** 20
  - **Label Smoothing:** 0.1 (to improve robustness against transcription noise)
  - **Batch Size:** 48
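
These values map onto the Hugging Face trainer roughly as follows. This is a reconstruction from the hyperparameters listed above, not the project's published training script; the output path and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction from the listed hyperparameters; the scheduler,
# warmup, and dataset handling of the original run are not documented here.
training_args = Seq2SeqTrainingArguments(
    output_dir="byt5-large-latin-normalizer",  # assumed path
    learning_rate=2e-4,
    num_train_epochs=20,
    label_smoothing_factor=0.1,        # robustness against transcription noise
    per_device_train_batch_size=48,
    bf16=True,                         # bf16 precision (see Hardware above)
    optim="adamw_torch_fused",
    predict_with_generate=True,        # assumption: generate during evaluation
)
```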

## Performance (Test Set)
The model was evaluated on a held-out test set (85 samples) from medieval legal corpora:

| Metric | Value |
| :--- | :--- |
| **Character Error Rate (CER)** | **1.62%** |
| **Word-Level F1-Score** | **94.12%** |
| **Evaluation Loss** | 0.143 |
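
For reference, CER is the character-level Levenshtein edit distance between prediction and ground truth, normalized by the length of the ground truth. A minimal self-contained sketch (the project's own evaluation script is not published here, so plain string comparison is assumed):

```python
def cer(prediction: str, reference: str) -> float:
    """Character Error Rate: Levenshtein edit distance / reference length."""
    m, n = len(prediction), len(reference)
    prev = list(range(n + 1))  # distances from the empty prediction prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if prediction[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(n, 1)

print(cer("vt soluatur", "ut solvatur"))  # 2 edits / 11 chars ≈ 0.18
```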

## Usage
You can use this model through the Hugging Face `pipeline` API:

```python
from transformers import pipeline

# Initialize the normalizer
normalizer = pipeline("text2text-generation", model="mschonhardt/latin-normalizer")

# Example input: a diplomatic transcription with u/v variation ("vt", "soluatur")
raw_text = "viiii vt in sabbato sancto ieiunium ante noctis initium non soluatur"
result = normalizer(raw_text, max_length=128)

print(f"Normalized: {result[0]['generated_text']}")
```
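
For batch processing (e.g., normalizing an edition line by line), loading the model directly can be more convenient. A minimal sketch using the standard `transformers` seq2seq API; the second input line and the generation settings are illustrative assumptions, not values from the project:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mschonhardt/latin-normalizer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()

lines = [
    "viiii vt in sabbato sancto ieiunium ante noctis initium non soluatur",
    "de obseruatione ieiuniorum",  # hypothetical second example
]

# Pad the byte sequences into a batch and generate normalized text.
inputs = tokenizer(lines, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```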

## Citation

If you use this model in your research, please cite:

```bibtex
@software{schonhardt_michael_2026_normalization,
  author    = {Schonhardt, Michael},
  title     = {Medieval Latin Normalizer},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18416639},
  url       = {https://doi.org/10.5281/zenodo.18416639}
}

@article{xue-etal-2022-byt5,
  title     = {{B}y{T}5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models},
  author    = {Xue, Linting and Barua, Aditya and Constant, Noah and Al-Rfou, Rami and
               Narang, Sharan and Kale, Mihir and Roberts, Adam and Raffel, Colin},
  editor    = {Roark, Brian and Nenkova, Ani},
  journal   = {Transactions of the Association for Computational Linguistics},
  volume    = {10},
  year      = {2022},
  address   = {Cambridge, MA},
  publisher = {MIT Press},
  url       = {https://aclanthology.org/2022.tacl-1.17/},
  doi       = {10.1162/tacl_a_00461},
  pages     = {291--306}
}
```