This model is fine-tuned on:

  • The Latin Library - 15M tokens
  • Perseus Project - 15M tokens

The dataset was cleaned as follows:

  • Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
  • Sentence splitting and normalisation with CLTK.
  • Deduplication of the corpus.
  • Lowercasing of all text.
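The cleaning steps above can be sketched as a small pipeline. This is a simplified, self-contained illustration, not the actual preprocessing code: it stands in for CLTK's sentence tokenizer with a naive regex splitter, and the `clean_corpus` function name and sample texts are invented for the example.

```python
import re

def clean_corpus(texts):
    """Sketch of the cleaning pipeline: filter pseudo-Latin filler,
    split into sentences, lowercase, and deduplicate.
    (The real pipeline used CLTK for splitting/normalisation.)"""
    seen = set()
    sentences = []
    for text in texts:
        # Drop pseudo-Latin filler such as "Lorem ipsum ..."
        if "lorem ipsum" in text.lower():
            continue
        # Naive sentence splitting on ., !, ? followed by whitespace
        for sent in re.split(r"(?<=[.!?])\s+", text):
            sent = sent.strip().lower()    # lowercase all text
            if sent and sent not in seen:  # deduplicate the corpus
                seen.add(sent)
                sentences.append(sent)
    return sentences

docs = [
    "Gallia est omnis divisa in partes tres. "
    "Gallia est omnis divisa in partes tres.",
    "Lorem ipsum dolor sit amet.",
]
print(clean_corpus(docs))
# -> ['gallia est omnis divisa in partes tres.']
```

The duplicated sentence collapses to one entry and the "Lorem ipsum" document is discarded entirely, mirroring the filtering and deduplication steps described above.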
Model details:

  • Format: Safetensors
  • Model size: 0.1B params
  • Tensor type: F32