How to use EMBEDDIA/litlat-bert with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/litlat-bert")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/litlat-bert")
```
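Once loaded, the pipeline fills in the model's mask token. A minimal sketch; the Lithuanian example sentence is only an illustration, and any text containing the mask token works:

```python
# Query the fill-mask pipeline; the example sentence is illustrative.
from transformers import pipeline

pipe = pipeline("fill-mask", model="EMBEDDIA/litlat-bert")

# LitLat BERT uses the xlm-roberta-base tokenizer, so we read the mask
# token from the pipeline rather than hard-coding it.
text = f"Vilnius yra Lietuvos {pipe.tokenizer.mask_token}."
for prediction in pipe(text, top_k=3):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```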
# LitLat BERT
LitLat BERT is a trilingual model based on the xlm-roberta-base architecture and trained on Lithuanian, Latvian, and English corpora. By focusing on three languages, the model outperforms multilingual BERT while still offering the option of cross-lingual knowledge transfer, which a monolingual model would not.
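Because the three languages share one encoder, representations can be compared across languages. The sketch below is one way to do this under an assumption the card does not make: that mean pooling the last hidden state gives a usable sentence embedding. The example sentences are illustrative.

```python
# Sketch: cross-lingual sentence similarity via mean-pooled embeddings.
# Mean pooling is an assumption, not an officially recommended setup.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/litlat-bert")
model = AutoModel.from_pretrained("EMBEDDIA/litlat-bert")

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)    # zero out padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Translations of the same sentence should land close together.
en = embed("The weather is nice today.")
lt = embed("Šiandien oras gražus.")   # Lithuanian
lv = embed("Šodien ir jauks laiks.")  # Latvian
print(torch.cosine_similarity(en, lt).item(),
      torch.cosine_similarity(en, lv).item())
```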
## Named entity recognition evaluation
We compare LitLat BERT with multilingual BERT (mBERT), XLM-RoBERTa (XLM-R), and the monolingual Latvian BERT (LVBERT) (Znotins and Barzdins, 2020). We report the results as the macro F1 score over the three named-entity classes shared by all three datasets: person, location, and organization (a sketch of the metric follows the table). Since LVBERT is Latvian-only, it is evaluated only on Latvian ("/" elsewhere).
| Language | mBERT | XLM-R | LVBERT | LitLat |
|---|---|---|---|---|
| Latvian | 0.830 | 0.865 | 0.797 | 0.881 |
| Lithuanian | 0.797 | 0.817 | / | 0.850 |
| English | 0.939 | 0.937 | / | 0.943 |
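For reference, macro F1 here is the unweighted mean of the per-class F1 scores over the three shared classes. A minimal sketch with invented token-level labels; NER is typically scored at span level (e.g. with seqeval), so this only illustrates the macro averaging:

```python
# Sketch: macro F1 restricted to person/location/organization.
# The gold/predicted labels below are invented for illustration.
from sklearn.metrics import f1_score

gold = ["PER", "LOC", "ORG", "LOC", "O", "PER"]
pred = ["PER", "LOC", "LOC", "LOC", "O", "PER"]

# labels= restricts scoring to the three shared entity classes, so the
# non-entity "O" tag does not enter the macro average.
score = f1_score(gold, pred, labels=["PER", "LOC", "ORG"], average="macro")
print(f"macro F1: {score:.3f}")
```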