---
license: mit
language:
- de
- fr
- en
- la
- el
- it
base_model:
- sven-nm/XLM-R-for-classics
pipeline_tag: token-classification
---
# Model description
This model is fine-tuned for Latin part-of-speech tagging on the [EvaLatin 2022](https://aclanthology.org/2022.lt4hala-1.29/) dataset for 40 epochs, and it outperforms all of our other models on this task. It can be used in an ordinary token-classification pipeline.
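A minimal usage sketch with the 🤗 Transformers `pipeline` API; the model id below is a placeholder, so replace it with this repository's id:

```python
from transformers import pipeline

# "your-org/latin-pos-tagger" is a placeholder model id, not this model's real name.
tagger = pipeline("token-classification", model="your-org/latin-pos-tagger")

# Each returned dict carries the token text and its predicted POS label.
for token in tagger("Gallia est omnis divisa in partes tres"):
    print(token["word"], token["entity"])
```

The labels printed are the POS tags the model was fine-tuned on; pass `aggregation_strategy="simple"` to the pipeline if you want subword pieces merged back into whole words.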