# DistilRoBERTa-Base-Go-Emotion
## Model description

A [distilroberta-base](https://huggingface.co/distilroberta-base) model fine-tuned on the GoEmotions dataset for multi-label emotion classification.
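A minimal usage sketch with the `transformers` pipeline API; the model id below is a placeholder for wherever this checkpoint is hosted, and `top_k=None` requests scores for every emotion label rather than only the top prediction.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<your-username>/distilroberta-base-go-emotion",  # placeholder model id
    top_k=None,  # return scores for all emotion labels
)

print(classifier("I am so happy you came to visit!"))
```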
## Training parameters

- Num examples: 27,135
- Num epochs: 8
- Instantaneous batch size per device: 8
- Gradient accumulation steps: 1
- Total optimization steps: 43,416
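These parameters map directly onto `transformers.TrainingArguments`. Below is a minimal sketch of a matching `Trainer` setup; the dataset variables, output directory, label count, and any arguments not listed above (learning rate, warmup, etc.) are assumptions.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base",
    num_labels=28,  # GoEmotions defines 28 labels (assumed here)
    problem_type="multi_label_classification",
)

args = TrainingArguments(
    output_dir="distilroberta-base-go-emotion",  # assumed output path
    num_train_epochs=8,                  # Num epochs = 8
    per_device_train_batch_size=8,       # Instantaneous batch size per device = 8
    gradient_accumulation_steps=1,       # Gradient accumulation steps = 1
)

# train_ds / eval_ds stand in for a tokenized GoEmotions dataset:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```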
## Training output

- `train_loss`: 0.09555230289697647
## Evaluation output

- `eval_f1_macro`: 0.5073628477609134
- `eval_f1_weighted`: 0.5804294635834008
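For reference, macro F1 averages per-label F1 scores equally, while weighted F1 weights each label's F1 by its support. A minimal sketch of how these metrics are typically computed for multi-label outputs with scikit-learn follows; the sigmoid-plus-0.5-threshold decision rule is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_f1(logits: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """logits: (n_examples, n_labels) raw model outputs; labels: 0/1 matrix."""
    probs = 1 / (1 + np.exp(-logits))         # sigmoid for multi-label outputs
    preds = (probs >= threshold).astype(int)  # independent per-label decision
    return {
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_weighted": f1_score(labels, preds, average="weighted"),
    }
```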