---
library_name: transformers
license: apache-2.0
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Bert-Emotion_classifier

A `bert-base-uncased` model fine-tuned for six-class emotion classification (sadness, joy, love, anger, fear, surprise) on the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset.

## Uses

### Direct Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "SkyAsl/Bert-Emotion_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for inference

text = "I am so happy to see you!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():  # no gradients needed for inference
    outputs = model(**inputs)
predicted_class = torch.argmax(outputs.logits, dim=-1).item()

id2label = {
    0: "sadness", 1: "joy", 2: "love",
    3: "anger", 4: "fear", 5: "surprise",
}
print("Predicted emotion:", id2label[predicted_class])
```
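If you also want a confidence score, apply a softmax to the logits. A self-contained sketch, with a dummy logits tensor standing in for a real `outputs.logits` (the values below are illustrative, not model output):

```python
import torch

id2label = {
    0: "sadness", 1: "joy", 2: "love",
    3: "anger", 4: "fear", 5: "surprise",
}

# Dummy logits in place of outputs.logits: batch of 1, six classes.
logits = torch.tensor([[0.2, 3.1, 0.5, 0.3, 0.1, 0.4]])
probs = torch.softmax(logits, dim=-1)  # normalize to per-class probabilities

pred = torch.argmax(probs, dim=-1).item()
print(f"{id2label[pred]} ({probs[0, pred].item():.2f})")  # → joy
```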

## Training Details

### Training Data

[dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion)

#### Training Hyperparameters

- learning_rate: 2e-4
- batch_size: 128
- epochs: 5
- weight_decay: 0.01
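As a rough sketch only, these hyperparameters map onto `transformers.TrainingArguments` as follows; this assumes the standard `Trainer` fine-tuning loop, and the actual training script is not included in this card:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration
# from the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-emotion",   # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=128,
    num_train_epochs=5,
    weight_decay=0.01,
)
```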

#### Metrics

- training_loss: 0.1061
- validation_loss: 0.1439
- accuracy: 0.94