# RoBERTa Basque small model (Uncased)

## Prerequisites

`transformers==4.19.2`
## Model architecture

This model has approximately half as many parameters as the RoBERTa base model.
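If needed, the parameter count can be checked after loading the model. This is a minimal verification sketch using standard PyTorch/Transformers calls, not part of the original card:

```python
from transformers import AutoModelForMaskedLM

# Load the model and count its trainable parameters
# (for comparison, roberta-base has roughly 125M parameters).
model = AutoModelForMaskedLM.from_pretrained("ClassCat/roberta-small-basque")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")
```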
## Tokenizer

Uses a BPE tokenizer with a vocabulary size of 50,000.
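As an illustration, the vocabulary size can be verified once the tokenizer is loaded; the sample sentence below is an arbitrary choice for demonstration, not taken from the card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ClassCat/roberta-small-basque")
print(tokenizer.vocab_size)  # expected: 50000

# Inspect the BPE subword split on a sample Basque sentence
# ("Zein da zure izena?" is roughly "What is your name?"; chosen for illustration).
print(tokenizer.tokenize("Zein da zure izena?"))
```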
## Training Data

## Usage
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='ClassCat/roberta-small-basque')

# "Zein da zure <mask> ?" is Basque for "What is your <mask>?".
unmasker("Zein da zure <mask> ?")
```
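Each call returns the top candidate fills, each with a score, the predicted token, and the completed sentence. For lower-level use, the model and tokenizer can also be loaded directly:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ClassCat/roberta-small-basque")
model = AutoModelForMaskedLM.from_pretrained("ClassCat/roberta-small-basque")
```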