Instructions for using Hailay/EXLMR with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Hailay/EXLMR with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-classification", model="Hailay/EXLMR")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hailay/EXLMR")
model = AutoModelForSequenceClassification.from_pretrained("Hailay/EXLMR")
```

- Notebooks
- Google Colab
- Kaggle
The EXLMR model is a multilingual transformer that expands the XLM-RoBERTa tokenizer by adding vocabulary for low-resource languages such as Tigrinya and Amharic. It addresses issues like out-of-vocabulary words and over-tokenization, enhancing the model's ability to represent languages written in the Ge'ez script. The model can be fine-tuned for various multilingual tasks, including sentiment analysis, question answering, named entity recognition, and paraphrase detection. These improvements make EXLMR highly effective for low-resource languages, while still supporting a broad range of languages with strong overall performance.
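To see why expanding the vocabulary helps, here is a toy pure-Python sketch of over-tokenization (this is not the actual EXLMR tokenizer, and the vocabularies below are made up for illustration): a greedy longest-match subword tokenizer splits a Ge'ez-script word into many pieces when the word is absent from the vocabulary, and into a single token once it is added.

```python
# Toy illustration of over-tokenization (NOT the real EXLMR tokenizer):
# a greedy longest-match subword tokenizer over a fixed vocabulary.

def greedy_tokenize(text, vocab):
    """Split text into the longest vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append("<unk>")  # out-of-vocabulary character
            i += 1
    return tokens

# A base vocabulary with only single Ge'ez characters: the Amharic word
# "ሰላም" ("hello") fragments into three one-character pieces.
base_vocab = {"ሰ", "ላ", "ም"}
print(greedy_tokenize("ሰላም", base_vocab))      # → ['ሰ', 'ላ', 'ም']

# After expanding the vocabulary with the whole word, it is one token.
expanded_vocab = base_vocab | {"ሰላም"}
print(greedy_tokenize("ሰላም", expanded_vocab))  # → ['ሰላም']
```

Fewer tokens per word means shorter input sequences and no `<unk>` fallbacks, which is the same effect the EXLMR vocabulary expansion aims for at scale.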