Zerpal Collection
The largest open-source Udmurt monolingual corpora and pre-trained language models (12 items).
How to use udmurtNLP/zerpal-rubert-tiny2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="udmurtNLP/zerpal-rubert-tiny2")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("udmurtNLP/zerpal-rubert-tiny2")
model = AutoModelForMaskedLM.from_pretrained("udmurtNLP/zerpal-rubert-tiny2")

You can use this model directly with a pipeline for masked language modeling:
from transformers import pipeline
unmasker = pipeline('fill-mask', model='udmurtNLP/zerpal-rubert-tiny2', tokenizer='udmurtNLP/zerpal-rubert-tiny2-tokenizer')
unmasker("Ӟечбур! Мынам нимы [MASK].")  # Udmurt: "Hello! My name is [MASK]."
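A fill-mask pipeline returns a ranked list of candidate fills, where each entry is a dict with `score`, `token`, `token_str`, and `sequence` keys. A minimal sketch of post-processing such results (the candidate names and scores below are illustrative placeholders, not real model output, so no model download is needed):

```python
# Illustrative fill-mask results; a real call to unmasker(...) returns
# dicts with the same keys but model-generated values.
results = [
    {"score": 0.42, "token_str": "Иван", "sequence": "Ӟечбур! Мынам нимы Иван."},
    {"score": 0.17, "token_str": "Петыр", "sequence": "Ӟечбур! Мынам нимы Петыр."},
]

# Pick the highest-scoring candidate fill.
best = max(results, key=lambda r: r["score"])
print(best["token_str"])  # prints the top prediction, here "Иван"
```

In practice you would pass `top_k` to the pipeline call to control how many candidates are returned.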
Here is how to use this model to get the features of a given text in PyTorch:
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('udmurtNLP/zerpal-rubert-tiny2-tokenizer')
model = AutoModelForMaskedLM.from_pretrained("udmurtNLP/zerpal-rubert-tiny2")
text = "Яратон, яратон, мар меда сыӵе тон?"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
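The `output` above comes from the masked-LM head; for sentence-level features a common approach is to mean-pool the encoder's last hidden state over non-padding tokens. A minimal sketch of such pooling, using dummy tensors in place of a real forward pass (the hidden size 312 matches rubert-tiny2; the helper name `mean_pool` is ours):

```python
import torch

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # (batch, 1)
    return summed / counts

# Dummy tensors standing in for a real model forward pass.
hidden = torch.randn(1, 5, 312)         # rubert-tiny2 hidden size is 312
attn = torch.tensor([[1, 1, 1, 1, 0]])  # last position is padding
features = mean_pool(hidden, attn)
print(features.shape)  # torch.Size([1, 312])
```

With a real model you would get `last_hidden_state` from the encoder outputs (e.g. by loading with `AutoModel` instead of `AutoModelForMaskedLM`) and pass the tokenizer's `attention_mask` alongside it.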