google-research-datasets/go_emotions
How to use sdeakin/fine_tuned_bert_emotions_large with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="sdeakin/fine_tuned_bert_emotions_large")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("sdeakin/fine_tuned_bert_emotions_large")
model = AutoModelForSequenceClassification.from_pretrained("sdeakin/fine_tuned_bert_emotions_large")
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "sdeakin/fine_tuned_bert_emotions_large"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
text = "I’m excited but a bit nervous about tomorrow!"
enc = tok(text, return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    logits = model(**enc).logits
probs = torch.sigmoid(logits)[0]
label_map = model.config.id2label
preds = [(label_map[i], probs[i].item()) for i in range(len(probs))]
print(sorted(preds, key=lambda x: x[1], reverse=True)[:5])
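Because GoEmotions is a multi-label dataset, a fixed probability threshold is often more useful than a top-5 cut: an input can express several emotions at once, or none strongly. A minimal sketch of threshold-based selection (the 0.3 cutoff and the example labels are illustrative assumptions, not tuned values from this model):

```python
import torch

def labels_above_threshold(probs, id2label, threshold=0.3):
    """Return (label, probability) pairs whose sigmoid score clears the threshold."""
    return [
        (id2label[i], p.item())
        for i, p in enumerate(probs)
        if p.item() >= threshold
    ]

# Synthetic probabilities for a 4-label head, just to show the output shape;
# in practice pass the `probs` tensor computed above and `model.config.id2label`.
probs = torch.tensor([0.72, 0.05, 0.41, 0.10])
id2label = {0: "joy", 1: "anger", 2: "nervousness", 3: "sadness"}
print(labels_above_threshold(probs, id2label))
```

Tune the threshold on a validation split; a single global cutoff is a common starting point, though per-label thresholds usually score better on GoEmotions.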
Base model
google-bert/bert-large-uncased