# distilhubert-debiasing-age
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset. It achieves the following results on the evaluation set:
- Loss: 1.8820
- Accuracy: 0.6702
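For quick experimentation, a minimal inference sketch using the `transformers` audio-classification pipeline is shown below (the audio file path is a placeholder; DistilHuBERT-based checkpoints expect 16 kHz mono input, which the pipeline handles when decoding files):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "audio-classification",
    model="NiloofarMomeni/distilhubert-debiasing-age",
)

# "example.wav" is a placeholder; the pipeline decodes and resamples the file.
for prediction in classifier("example.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```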
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
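The card only names the generic `audiofolder` loader. As a hedged sketch, such a dataset is typically assembled from a directory containing one subfolder per class label (the path below is hypothetical):

```python
from datasets import Audio, load_dataset

# Hypothetical layout: data_dir contains one subdirectory per class label.
dataset = load_dataset("audiofolder", data_dir="path/to/audio")

# Resample to 16 kHz, the rate DistilHuBERT was pretrained on.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset)
```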
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
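For reference, these settings map onto `TrainingArguments` roughly as follows (a reconstruction from the list above; `output_dir` and the per-epoch evaluation strategy are assumptions, the latter inferred from the per-epoch validation results below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-debiasing-age",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    fp16=True,  # native AMP mixed-precision training
    eval_strategy="epoch",  # assumed: the table below reports per-epoch validation
)
```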
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 1.2915 | 1.0 | 117 | 1.4201 | 0.6064 |
| 1.3727 | 2.0 | 234 | 1.7079 | 0.6064 |
| 1.1821 | 3.0 | 351 | 1.6523 | 0.5851 |
| 1.3401 | 4.0 | 468 | 1.4616 | 0.6383 |
| 1.6345 | 5.0 | 585 | 1.5092 | 0.6170 |
| 1.1763 | 6.0 | 702 | 1.6093 | 0.6277 |
| 0.999 | 7.0 | 819 | 1.4447 | 0.6277 |
| 0.9842 | 8.0 | 936 | 1.5173 | 0.6702 |
| 0.9366 | 9.0 | 1053 | 1.8773 | 0.6809 |
| 0.9529 | 10.0 | 1170 | 1.8331 | 0.6489 |
| 1.2192 | 11.0 | 1287 | 2.0470 | 0.6702 |
| 0.8482 | 12.0 | 1404 | 1.9989 | 0.6809 |
| 0.9902 | 13.0 | 1521 | 2.3879 | 0.6383 |
| 1.0078 | 14.0 | 1638 | 2.1982 | 0.6809 |
| 0.9427 | 15.0 | 1755 | 1.9457 | 0.6596 |
| 0.9801 | 16.0 | 1872 | 1.9722 | 0.6702 |
| 0.9372 | 17.0 | 1989 | 1.9988 | 0.6596 |
| 0.9671 | 18.0 | 2106 | 1.8085 | 0.7128 |
| 0.9031 | 19.0 | 2223 | 1.8938 | 0.6702 |
| 0.8846 | 20.0 | 2340 | 1.8820 | 0.6702 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0