DistilRoBERTa-Base-Go-Emotion-LightWeight

Model description:

A lightweight emotion classifier: distilroberta-base fine-tuned on the GoEmotions dataset to classify emotions in short English text.

Training Parameters:

Num examples = 10,936
Num Epochs = 4
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 2,716
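
For reference, the parameters above map onto a Hugging Face TrainingArguments object roughly as follows. This is only a sketch assuming the standard transformers Trainer API; any value not listed above (output directory, learning rate) is an illustrative assumption, not a setting from the original run.

```python
from transformers import TrainingArguments

# Arguments matching the reported run: 4 epochs, per-device batch size 16,
# no gradient accumulation (effective train batch size 16).
# output_dir and learning_rate are assumptions, not reported values.
training_args = TrainingArguments(
    output_dir="distilroberta-base-go-emotion-light",  # hypothetical path
    num_train_epochs=4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=2e-5,  # assumed, not reported
)
```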

TrainOutput:

'train_loss': 0.08987611532211304, 

Evaluation Output:

 'eval_f1_macro': 0.4305746984659489,
 'eval_f1_weighted': 0.573409096926279
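
The two F1 scores above are the standard macro and weighted averages. A minimal sketch of how such numbers are typically computed for a multi-label GoEmotions setup with scikit-learn; the sigmoid + 0.5 threshold and the function name are assumptions, not taken from the repo's evaluation script.

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_f1(logits: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """Multi-label F1: apply a sigmoid to the logits, threshold, score per label."""
    probs = 1 / (1 + np.exp(-logits))         # sigmoid over raw model outputs
    preds = (probs >= threshold).astype(int)  # assumed 0.5 cut-off
    return {
        "eval_f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "eval_f1_weighted": f1_score(labels, preds, average="weighted", zero_division=0),
    }
```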

GitHub Repo (Full Project)

Full training code, dataset prep, and evaluation scripts: https://github.com/SimonBis05/NLP_Emotion_Analysis
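
For quick inference, the checkpoint can be loaded with the transformers pipeline. A minimal sketch assuming the model id shown on this page; top_k=None returns a score for every emotion label.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; top_k=None keeps scores for all labels.
classifier = pipeline(
    "text-classification",
    model="YardenFadida/distilroberta-base-go-emotion-light",
    top_k=None,
)

print(classifier("Thanks so much, this made my day!"))
```

Whether the returned scores should be read as independent per-label probabilities or as a softmax distribution depends on the problem_type stored in the checkpoint's config.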

Model size: 82.1M params (F32, Safetensors)

Dataset used to train YardenFadida/distilroberta-base-go-emotion-light: go_emotions (GoEmotions)