---
license: mit
language:
  - el
pipeline_tag: fill-mask
---

# Logion base model

A BERT-based model pretrained on the largest corpus of pre-modern Greek to date (70+ million words). It was introduced in [this paper](https://aclanthology.org/2023.alp-1.20). The model is uncased and ignores accents/diacritics.

## How to use

Requirements:

```
pip install transformers
```

Load the model and tokenizer directly from the Hugging Face Model Hub:

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("princeton-logion/logion-bert-base")
model = BertForMaskedLM.from_pretrained("princeton-logion/logion-bert-base")
```

## Cite

If you use this model in your research, please cite the paper:

```bibtex
@inproceedings{cowen-breen-etal-2023-logion,
    title = "Logion: Machine-Learning Based Detection and Correction of Textual Errors in {G}reek Philology",
    author = "Cowen-Breen, Charlie  and
      Brooks, Creston  and
      Graziosi, Barbara  and
      Haubold, Johannes",
    booktitle = "Proceedings of the Ancient Language Processing Workshop",
    year = "2023",
    url = "https://aclanthology.org/2023.alp-1.20",
    pages = "170--178",
}
```