Dataset: dair-ai/emotion
How to use Kosee/roberta-base-finetuned-emotion with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Kosee/roberta-base-finetuned-emotion")
```

```python
# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Kosee/roberta-base-finetuned-emotion")
model = AutoModelForSequenceClassification.from_pretrained("Kosee/roberta-base-finetuned-emotion")
```

This model is a fine-tuned version of roberta-base on the emotion dataset. It achieves the following results on the evaluation set:
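When loading the model directly, the raw output is a vector of logits that must be converted to probabilities. A minimal sketch of that post-processing step, runnable without downloading the checkpoint; the label order here is an assumption based on the dair-ai/emotion dataset, so check `model.config.id2label` on the actual checkpoint:

```python
import math

# Assumed label order from the dair-ai/emotion dataset -- verify against
# model.config.id2label before relying on it.
ID2LABEL = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_prediction(logits):
    """Return the (label, probability) pair with the highest score."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]

# Example logits, made up for illustration (not real model output).
label, prob = top_prediction([-1.2, 4.1, 0.3, -0.5, -2.0, 0.1])
print(label, round(prob, 3))  # "joy" has the largest logit
```

The pipeline helper performs the same softmax-and-argmax step internally and returns a list of `{"label": ..., "score": ...}` dicts.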
The model was trained twice. In the first run the hyperparameters were the same except that num_epochs was 3, so the table below actually reflects 8 epochs of fine-tuning in total.

Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| 0.1508 | 1.0 | 250 | 0.1969 | 0.934 | 0.9334 |
| 0.1035 | 2.0 | 500 | 0.1660 | 0.9335 | 0.9341 |
| 0.0926 | 3.0 | 750 | 0.1626 | 0.935 | 0.9359 |
| 0.0855 | 4.0 | 1000 | 0.1680 | 0.934 | 0.9337 |
| 0.0682 | 5.0 | 1250 | 0.1669 | 0.94 | 0.9404 |
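The F1 column is presumably the weighted average over the six emotion classes (the usual choice for this imbalanced dataset); that is an assumption, as the card does not state the averaging mode. A minimal pure-Python sketch of the weighted-F1 definition on toy labels:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to true-class support."""
    classes = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (support[c] / total) * f1  # weight by class frequency
    return score

# Toy labels for illustration only -- not drawn from the actual evaluation set.
y_true = ["joy", "joy", "sadness", "anger", "joy", "sadness"]
y_pred = ["joy", "sadness", "sadness", "anger", "joy", "joy"]
print(round(weighted_f1(y_true, y_pred), 4))
```

This matches `sklearn.metrics.f1_score(..., average="weighted")`, which is what `evaluate`/`sklearn`-based training scripts typically report.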