---
language: la
license: apache-2.0
inference: false
---
# LaTa

The paper [Exploring Large Language Models for Classical Philology](https://arxiv.org/abs/2305.13698) is the first effort to systematically provide state-of-the-art language models for Classical Philology. LaTa is a monolingual, T5-base-sized encoder-decoder variant.

This model was trained on the [Corpus Corporum](https://mlat.uzh.ch/).

Further information can be found in our paper or in our [GitHub repository](https://github.com/Heidelberg-NLP/ancient-language-models).

## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('bowphs/LaTa')
model = AutoModelForSeq2SeqLM.from_pretrained('bowphs/LaTa')
```
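LaTa is a pretrained model and is meant to be fine-tuned for downstream tasks. Still, as a quick sanity check that everything loads and runs, you can probe the span-denoising pretraining objective directly. This is only a minimal sketch, assuming the tokenizer exposes the usual T5 sentinel tokens such as `<extra_id_0>`; the Latin input is an arbitrary illustration, and the completion is not guaranteed to be meaningful:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('bowphs/LaTa')
model = AutoModelForSeq2SeqLM.from_pretrained('bowphs/LaTa')

# Mask a span with a T5 sentinel token; the model proposes a fill-in.
text = 'Gallia est omnis divisa in partes <extra_id_0>.'
inputs = tokenizer(text, return_tensors='pt')

outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
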
Please check out the awesome Hugging Face tutorials on how to fine-tune our models.
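For illustration, here is a minimal fine-tuning sketch using the standard `Seq2SeqTrainer` API. The tiny inline dataset, the output directory, and all hyperparameters are placeholders for this example, not the configuration used in the paper:

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('bowphs/LaTa')
model = AutoModelForSeq2SeqLM.from_pretrained('bowphs/LaTa')

# Toy word-form -> lemma pairs; replace with your actual task data.
data = Dataset.from_dict({
    'source': ['puellae', 'amaverunt'],
    'target': ['puella', 'amo'],
})

def preprocess(batch):
    # Tokenize inputs and targets; targets become the decoder labels.
    model_inputs = tokenizer(batch['source'], truncation=True)
    labels = tokenizer(text_target=batch['target'], truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=data.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir='lata-finetuned', num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
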
## Evaluation Results
When fine-tuned on lemmatization data from [EvaLatin 2022](https://universaldependencies.org/), LaTa achieves the following results:

| Task | Classical | Cross-genre | Cross-time |
|:--:|:--:|:--:|:--:|
| Lemmatization | 97.30 | 93.95 | 92.26 |

## Contact
If you have any questions or problems, feel free to [reach out](mailto:riemenschneider@cl.uni-heidelberg.de).

## Citation
```bibtex
@incollection{riemenschneiderfrank:2023,
  address = "Toronto, Canada",
  author = "Riemenschneider, Frederick and Frank, Anette",
  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)",
  note = "to appear",
  publisher = "Association for Computational Linguistics",
  title = "Exploring Large Language Models for Classical Philology",
  url = "https://arxiv.org/abs/2305.13698",
  year = "2023"
}
```