SloBERTa model fine-tuned for natural language inference: first on 50,000 samples from the ESNLI dataset machine-translated into Slovene, then on the Slovene SI-NLI dataset.

Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("timkmecl/sloberta-esnli-sinli")
model = AutoModelForSequenceClassification.from_pretrained("timkmecl/sloberta-esnli-sinli")

Expected inputs are of the form

Premisa: {premise}
Hipoteza: {hypothesis}

where {premise} and {hypothesis} are replaced by the premise and the hypothesis in Slovene.

Class 0 is entailment, class 1 is neutral, and class 2 is contradiction.
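Putting the pieces together, a minimal end-to-end sketch, assuming the checkpoint carries the three-way sequence-classification head described above; the Slovene example sentences are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative example pair; replace with your own Slovene premise and hypothesis.
premise = "Moški igra kitaro."
hypothesis = "Nekdo igra na glasbilo."

# Build the input in the format the model expects.
text = f"Premisa: {premise}\nHipoteza: {hypothesis}"

tokenizer = AutoTokenizer.from_pretrained("timkmecl/sloberta-esnli-sinli")
model = AutoModelForSequenceClassification.from_pretrained("timkmecl/sloberta-esnli-sinli")
model.eval()

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted class index (0/1/2) to its label.
id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
prediction = id2label[int(logits.argmax(dim=-1))]
print(prediction)
```

The argmax over the three logits gives the predicted class; apply `torch.softmax(logits, dim=-1)` instead if you need per-class probabilities.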
