How to use maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1")
model = AutoModelForSequenceClassification.from_pretrained("maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1")
```
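When loading the model directly, the raw outputs are logits over the 11 emotion labels. Since the task is multi-label, each logit is mapped to an independent probability with a sigmoid and thresholded, rather than softmaxed. A minimal sketch using dummy logits in place of a real forward pass (the values and the 0.5 cutoff are illustrative assumptions, not from the model card):

```python
import torch

# Dummy logits standing in for model(**tokenizer(text, return_tensors="pt")).logits
# (shape: batch x 11 emotion labels); real values come from the model above.
logits = torch.tensor([[2.0, -1.0, 0.5, -3.0, 1.5, -0.5, 0.0, -2.0, 3.0, -1.5, 0.2]])

probs = torch.sigmoid(logits)            # independent probability per label
predicted = (probs >= 0.5).nonzero()     # (batch, label) indices over the cutoff
```

Each row of `predicted` is a `(batch_index, label_index)` pair; the label names for those indices come from `model.config.id2label`.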
# BERTIN-roBERTa-base-Spanish_sem_eval_2018_task_1
This is a BERTIN-roBERTa-base-Spanish model fine-tuned on ~3500 Spanish tweets annotated for 11 emotion categories in SemEval-2018 Task 1: Affect in Tweets, Subtask 5: Emotion Classification (the dataset is also available on the Hugging Face Dataset Hub).
To quickly test it locally, use a pipeline:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1")
pipe("¡Odio tener tanto estrés!", top_k=11)
```
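Because a tweet can express several emotions at once, Subtask 5 is a multi-label task: the 11 scores returned with `top_k=11` are independent per-label probabilities, so predictions are usually taken by thresholding rather than by picking the single best label. A minimal helper sketch (the 0.5 cutoff and the example scores are illustrative assumptions, not from the model card):

```python
def predicted_emotions(scores, threshold=0.5):
    """Filter pipeline output (a list of {'label': ..., 'score': ...} dicts)
    down to the label names whose score meets the threshold."""
    return [d["label"] for d in scores if d["score"] >= threshold]

# Illustrative output shape from pipe(text, top_k=11), truncated to three labels:
example = [
    {"label": "anger", "score": 0.91},
    {"label": "disgust", "score": 0.12},
    {"label": "fear", "score": 0.55},
]
print(predicted_emotions(example))  # ['anger', 'fear']
```

Raising the threshold trades recall for precision; the best cutoff for a given application is typically tuned on a validation split.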