# DistilRoBERTa-Base-Go-Emotion-LightWeight
## Model description

A lightweight emotion-classification model: `distilroberta-base` fine-tuned on the GoEmotions dataset.
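Assuming the model is published on the Hugging Face Hub, inference follows the standard `transformers` pipeline pattern. A minimal sketch is below; the model ID is a guess based on the GitHub username and should be replaced with the actual repository path.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SimonBis05/DistilRoBERTa-Base-Go-Emotion-LightWeight",  # hypothetical model ID
    top_k=None,  # return scores for every emotion label, not just the top one
)

print(classifier("I can't believe you did this for me, thank you so much!"))
```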
## Training Parameters

- Num examples: 10,936
- Num epochs: 4
- Instantaneous batch size per device: 16
- Total train batch size (w. parallel, distributed & accumulation): 16
- Gradient accumulation steps: 1
- Total optimization steps: 2,716
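The parameters above map directly onto the Hugging Face `TrainingArguments`. The sketch below reproduces the stated values; the output path and learning rate are not given in this card and are illustrative assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./distilroberta-goemotions",  # hypothetical path, not from the card
    num_train_epochs=4,                       # Num epochs = 4
    per_device_train_batch_size=16,           # Instantaneous batch size per device = 16
    gradient_accumulation_steps=1,            # Gradient accumulation steps = 1
    learning_rate=2e-5,                       # assumed; not stated in the card
)
# 10,936 examples at batch size 16 is ~684 steps per epoch, so 4 epochs
# comes out around 2.7k steps, consistent with the 2,716 logged above.
```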
## TrainOutput

- `train_loss`: 0.08987611532211304
## Evaluation Output

- `eval_f1_macro`: 0.4305746984659489
- `eval_f1_weighted`: 0.573409096926279
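These two scores are the standard scikit-learn F1 averages; a minimal sketch of how they are typically computed is below, with `y_true` and `y_pred` as illustrative stand-ins for the real evaluation labels.

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 2, 1, 2, 0])  # gold emotion label IDs (example data)
y_pred = np.array([0, 2, 2, 2, 0])  # model predictions (example data)

# Macro-F1 averages per-class F1 equally; weighted-F1 weights each class by its support.
print("f1_macro:   ", f1_score(y_true, y_pred, average="macro"))
print("f1_weighted:", f1_score(y_true, y_pred, average="weighted"))
```

The gap between the macro (0.43) and weighted (0.57) scores is expected for GoEmotions, where rare emotion classes pull down the unweighted macro average.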
## GitHub Repo (Full Project)

Full training code, dataset prep, and evaluation scripts: https://github.com/SimonBis05/NLP_Emotion_Analysis