How to use lgessler/microbert-coptic-mxp with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="lgessler/microbert-coptic-mxp")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("lgessler/microbert-coptic-mxp")
model = AutoModel.from_pretrained("lgessler/microbert-coptic-mxp")
```
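As a quick sanity check, the feature-extraction pipeline returns one vector per (sub)word token. The sketch below uses an arbitrary placeholder string in Coptic script; substitute any Coptic text you want to encode.

```python
# Sketch: per-token contextual embeddings for a short Coptic string.
# "ⲡⲛⲟⲩⲧⲉ" is only a placeholder example; any Coptic-script text works.
from transformers import pipeline

pipe = pipeline("feature-extraction", model="lgessler/microbert-coptic-mxp")
features = pipe("ⲡⲛⲟⲩⲧⲉ")

# Output is nested lists shaped [batch, tokens, hidden_size]
print(len(features[0]))     # number of tokens, including special tokens
print(len(features[0][0]))  # hidden size of the encoder
```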
This is a MicroBERT model for Coptic.
- Its suffix is -mxp, which means that it was pretrained using supervision from masked language modeling, XPOS tagging, and UD dependency parsing.
- The unlabeled Coptic data was taken from version 4.2.0 of the Coptic SCRIPTORIUM corpus, totaling 970,642 tokens.
- The labeled data was taken from version 2.9 of the UD treebank UD_Coptic_Scriptorium, totaling 48,632 tokens.
Please see the repository and the paper for more details.
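Since the card exposes the model as a plain encoder, one generic way to get a single vector per sentence is to mean-pool the final hidden states over non-padding tokens. This is a minimal sketch of that common technique, not something prescribed by MicroBERT; the Coptic strings are placeholders.

```python
# Sketch: mean-pooled sentence embeddings from the encoder's last hidden states.
# Mean pooling is a generic choice here, not specified by the MicroBERT paper.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("lgessler/microbert-coptic-mxp")
model = AutoModel.from_pretrained("lgessler/microbert-coptic-mxp")

sentences = ["ⲡⲛⲟⲩⲧⲉ", "ⲡⲣⲱⲙⲉ"]  # placeholder Coptic strings
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state         # [batch, seq_len, hidden]

mask = batch["attention_mask"].unsqueeze(-1).float()  # [batch, seq_len, 1]
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                               # [batch, hidden]
```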