Instructions to use mbruton/gal_sp_XLM-R with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mbruton/gal_sp_XLM-R with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="mbruton/gal_sp_XLM-R")
```

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("mbruton/gal_sp_XLM-R")
model = AutoModelForTokenClassification.from_pretrained("mbruton/gal_sp_XLM-R")
```

A short inference sketch follows the notebook links below.

- Notebooks
- Google Colab
- Kaggle
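To sanity-check the pipeline end to end, it can be called on a Galician sentence. This is a minimal sketch: the example sentence, the aggregation setting, and the printed fields are illustrative assumptions, and the model's actual tag set comes from its own config.

```python
# Minimal sketch (assumed: example sentence, word-level aggregation);
# the model's actual label set is defined in model.config.id2label.
from transformers import pipeline

srl = pipeline(
    "token-classification",
    model="mbruton/gal_sp_XLM-R",
    aggregation_strategy="simple",  # merge subword pieces into word-level spans
)

for span in srl("O can comeu a mazá no xardín."):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```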
Update README.md
README.md changed:

```diff
@@ -50,7 +50,7 @@ Galician is a low-resource language which prior to this project lacked a semanti
 
 ### Training Data
 
-This model was pre-trained on the [SpanishSRL Dataset](
+This model was pre-trained on the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
 This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
 
 #### Training Hyperparameters
```
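Both training corpora are hosted as Hugging Face datasets at the links in the diff above, so they can be loaded with the `datasets` library. A hedged sketch follows: the fine-tuning split is named "train" in the README, but the available configurations and column names should be confirmed on each dataset's page.

```python
# Sketch of loading the datasets referenced above (repo IDs taken from the README links);
# split and column names are assumptions to verify on the dataset pages.
from datasets import load_dataset

spanish_srl = load_dataset("mbruton/spanish_srl")                     # pre-training data
galician_train = load_dataset("mbruton/galician_srl", split="train")  # fine-tuning split

print(galician_train)
```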