Instructions to use mbruton/gal_XLM-R with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mbruton/gal_XLM-R with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="mbruton/gal_XLM-R")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mbruton/gal_XLM-R")
model = AutoModelForTokenClassification.from_pretrained("mbruton/gal_XLM-R")
```
- Notebooks
- Google Colab
- Kaggle
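For reference, `pipeline("token-classification", ...)` returns one dict per token with `word`, `entity`, `score`, and `index` keys. Below is a minimal sketch of grouping those predictions by predicted label; the sentence, scores, and label names (`r0:arg0`, etc.) are illustrative placeholders rather than output from a live model call:

```python
# Hypothetical output shaped like a real token-classification pipeline result;
# a live call would be: results = pipe("O neno leu o libro.")
results = [
    {"word": "O", "entity": "r0:arg0", "score": 0.98, "index": 1},
    {"word": "neno", "entity": "r0:arg0", "score": 0.97, "index": 2},
    {"word": "leu", "entity": "r0:root", "score": 0.99, "index": 3},
    {"word": "o", "entity": "r0:arg1", "score": 0.96, "index": 4},
    {"word": "libro", "entity": "r0:arg1", "score": 0.95, "index": 5},
]

def group_by_label(predictions):
    """Collect token words under each predicted label, preserving token order."""
    grouped = {}
    for pred in predictions:
        grouped.setdefault(pred["entity"], []).append(pred["word"])
    return grouped

print(group_by_label(results))
```

The model's actual tag set is defined in its `config.json` (`id2label`), so inspect that rather than assuming the labels shown here.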
# Model Card for GalXLM-R for Semantic Role Labeling

This model is fine-tuned on a version of [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.