Dataset: hapaxlegomenon/InferBR
How to use felipesfpaula/bertimbau-large-InferBr-NLI with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="felipesfpaula/bertimbau-large-InferBr-NLI")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("felipesfpaula/bertimbau-large-InferBr-NLI")
model = AutoModelForSequenceClassification.from_pretrained("felipesfpaula/bertimbau-large-InferBr-NLI")
```

Fine-tuned from neuralmind/bert-large-portuguese-cased, this model is intended for research and applications requiring Portuguese NLI.

Not intended for:
The model uses the neuralmind/bert-large-portuguese-cased tokenizer and predicts integer labels in {0, 1, 2}. These metrics were computed on the held-out InferBR test split.
accuracy = (number of correctly predicted labels) / (total number of examples)
f1_macro = unweighted average F₁ across labels {0, 1, 2}

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# 1. Load tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("felipesfpaula/bertimbau-large-InferBr-NLI")
model = AutoModelForSequenceClassification.from_pretrained("felipesfpaula/bertimbau-large-InferBr-NLI")

# 2. Encode a premise–hypothesis pair
premise = "O gato está sentado no sofá."
hypothesis = "O gato está deitado no sofá."
encoded = tokenizer(
    premise,
    hypothesis,
    return_tensors="pt",
    max_length=128,
    truncation=True,
    padding="max_length",
)

# 3. Run inference
with torch.no_grad():
    outputs = model(**encoded)
logits = outputs.logits
pred_id = torch.argmax(logits, dim=-1).item()

# 4. Map prediction to label
label_map = {0: "Contradiction", 1: "Entailment", 2: "Neutral"}
print(f"Predicted label: {label_map[pred_id]}")
```
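The script above reports only the arg-max label. To also report a confidence for each class, the logits can be passed through a softmax. A minimal pure-Python sketch (the logit values below are hypothetical, not real model output; with torch, `torch.softmax(logits, dim=-1)` does the same):

```python
import math

def softmax(logits):
    """Convert a list of raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

label_map = {0: "Contradiction", 1: "Entailment", 2: "Neutral"}
logits = [0.3, 2.1, -0.5]  # hypothetical logits for one premise–hypothesis pair
probs = softmax(logits)
pred_id = max(range(len(probs)), key=probs.__getitem__)
print(label_map[pred_id], round(probs[pred_id], 3))
```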
Base model: neuralmind/bert-large-portuguese-cased
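For reference, the accuracy and macro-F1 definitions given above can be computed without any dependencies. A small sketch over toy label lists (illustrative only, not the actual test split):

```python
def accuracy(y_true, y_pred):
    """Fraction of examples whose predicted label matches the gold label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_macro(y_true, y_pred, labels=(0, 1, 2)):
    """Unweighted mean of per-label F1 scores over the given labels."""
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy gold and predicted labels (illustrative only)
y_true = [0, 1, 2, 1]
y_pred = [0, 1, 1, 1]
print(accuracy(y_true, y_pred))            # → 0.75
print(round(f1_macro(y_true, y_pred), 2))  # → 0.6
```

In practice `sklearn.metrics.f1_score(..., average="macro")` computes the same quantity.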