# Twitter-roBERTa-base-jun2022_sem_eval_2018_task1
This model was trained on ~7000 tweets in English annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751) (also available on the [Hugging Face Dataset Hub](https://huggingface.co/datasets/sem_eval_2018_task_1)).
The underlying model is a RoBERTa-base model trained on 132.26M tweets until the end of June 2022. For more details, check out the [model page](https://huggingface.co/cardiffnlp/twitter-roberta-base-jun2022).
To quickly test it locally, use a pipeline:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="maxpe/twitter-roberta-base-jun2022_sem_eval_2018_task_1")
pipe("I couldn't see any seafood for a year after I went to that restaurant that they send all the tourists to!", top_k=11)
```
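Because the task is multi-label (a tweet can express several of the 11 emotions at once), the per-label scores returned with `top_k=11` are usually thresholded independently rather than compared against one another. A minimal sketch of that post-processing step, assuming the pipeline's usual `[{"label": ..., "score": ...}]` output shape and an illustrative 0.5 cutoff (the score values below are made up, not real model output):

```python
# Illustrative scores in the pipeline's output format; a real run would
# return one dict per emotion label for the input tweet.
scores = [
    {"label": "anger", "score": 0.81},
    {"label": "disgust", "score": 0.74},
    {"label": "joy", "score": 0.03},
]

# Multi-label decision: keep every emotion whose score clears the cutoff,
# independently of the other labels.
THRESHOLD = 0.5
predicted = [d["label"] for d in scores if d["score"] >= THRESHOLD]
print(predicted)  # ['anger', 'disgust']
```

The cutoff is a tunable assumption; depending on how much precision or recall matters for your application, a per-label threshold chosen on a validation split may work better than a single global one.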