superb_ks_42

This model is a fine-tuned version of facebook/hubert-base-ls960 on the superb dataset. It achieves the following results on the evaluation set:

  • Loss: 274600.625
  • Accuracy: 0.6168
  • Test Accuracy: 0.6168
  • Df Accuracy: 0.1195
  • Unlearn Overall Accuracy: 0.7487
  • Unlearn Time: 9214.8598
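The reported numbers are consistent with the overall unlearning score being the mean of the retained test accuracy and one minus the forget-set ("Df") accuracy: (0.6168 + (1 − 0.1195)) / 2 ≈ 0.7487. A minimal sketch of that relationship (the function name is made up here, and reading "Df" as the forget set is an assumption based on common unlearning notation, not something the card defines):

```python
def unlearn_overall_accuracy(test_acc: float, df_acc: float) -> float:
    """Mean of retain/test accuracy and (1 - forget-set accuracy).

    Assumption: "Df" denotes the forget set, as is common in the
    machine-unlearning literature; this card does not define it.
    """
    return (test_acc + (1.0 - df_acc)) / 2.0

# Values from the evaluation set above:
score = unlearn_overall_accuracy(0.6168, 0.1195)
print(score)  # ~0.7487, matching the reported Unlearn Overall Accuracy
```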

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
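With a linear scheduler and a warmup ratio of 0.1, the learning rate climbs linearly from 0 to 5e-05 over the first 10% of optimizer steps, then decays linearly back to 0. A pure-Python sketch of that schedule (mirroring the behavior of `get_linear_schedule_with_warmup` in Transformers, not the library code itself; the total of 62,560 steps is taken from the final row of the training table):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5,
              warmup_ratio: float = 0.1) -> float:
    """Linear warmup for the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 62560  # final step reported in the training results table
print(linear_lr(0, total))      # 0.0
print(linear_lr(6256, total))   # peak of 5e-05 at the end of warmup
print(linear_lr(62560, total))  # 0.0
```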

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Overall Accuracy | Unlearn Overall Accuracy | Time |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------------:|:------------------------:|:----:|
| 0.0 | 0.5 | 1564 | 2.3320 | 0.2419 | 0.7980 | 0.7980 | -1 |
| 0.0 | 1.0 | 3128 | 43.3185 | 0.2380 | 0.6914 | 0.6914 | -1 |
| 0.0 | 1.5 | 4692 | 204.8400 | 0.2351 | 0.6923 | 0.6923 | -1 |
| 0.0 | 2.0 | 6256 | 655.8871 | 0.2282 | 0.6939 | 0.6939 | -1 |
| 0.0 | 2.5 | 7820 | 1597.6744 | 0.2233 | 0.6948 | 0.6948 | -1 |
| 0.0 | 3.0 | 9384 | 3150.5979 | 0.2214 | 0.6971 | 0.6971 | -1 |
| 0.0 | 3.5 | 10948 | 5529.0425 | 0.2047 | 0.7029 | 0.7029 | -1 |
| 0.0 | 4.0 | 12512 | 8752.1719 | 0.2008 | 0.7015 | 0.7015 | -1 |
| 0.0 | 4.5 | 14076 | 12488.5586 | 0.2047 | 0.7038 | 0.7038 | -1 |
| 0.0 | 5.0 | 15640 | 17659.9004 | 0.2018 | 0.7077 | 0.7077 | -1 |
| 0.0 | 5.5 | 17204 | 22695.4512 | 0.1832 | 0.7125 | 0.7125 | -1 |
| 0.0 | 6.0 | 18768 | 29303.1426 | 0.1714 | 0.7121 | 0.7121 | -1 |
| 0.0 | 6.5 | 20332 | 35666.2539 | 0.1822 | 0.7165 | 0.7165 | -1 |
| 0.0 | 7.0 | 21896 | 43349.8242 | 0.1714 | 0.7207 | 0.7207 | -1 |
| 0.0 | 7.5 | 23460 | 51642.0039 | 0.1655 | 0.7233 | 0.7233 | -1 |
| 0.0 | 7.99 | 25024 | 61990.25 | 0.1694 | 0.7236 | 0.7236 | -1 |
| 0.0 | 8.49 | 26588 | 71746.25 | 0.1616 | 0.7268 | 0.7268 | -1 |
| 0.0 | 8.99 | 28152 | 83252.8438 | 0.1587 | 0.7286 | 0.7286 | -1 |
| 0.0 | 9.49 | 29716 | 96622.7344 | 0.1567 | 0.7312 | 0.7312 | -1 |
| 0.0 | 9.99 | 31280 | 106065.3594 | 0.1499 | 0.7300 | 0.7300 | -1 |
| 0.0 | 10.49 | 32844 | 116312.9688 | 0.1538 | 0.7311 | 0.7311 | -1 |
| 0.0 | 10.99 | 34408 | 133755.5156 | 0.1528 | 0.7325 | 0.7325 | -1 |
| 0.0 | 11.49 | 35972 | 145134.125 | 0.1499 | 0.7345 | 0.7345 | -1 |
| 0.0 | 11.99 | 37536 | 171217.9219 | 0.1430 | 0.7326 | 0.7326 | -1 |
| 0.0 | 12.49 | 39100 | 165460.2031 | 0.1401 | 0.7375 | 0.7375 | -1 |
| 0.0 | 12.99 | 40664 | 186145.6719 | 0.1430 | 0.7372 | 0.7372 | -1 |
| 0.0 | 13.49 | 42228 | 190899.4375 | 0.1430 | 0.7378 | 0.7378 | -1 |
| 0.0 | 13.99 | 43792 | 197004.9219 | 0.1381 | 0.7397 | 0.7397 | -1 |
| 0.0 | 14.49 | 45356 | 209167.8906 | 0.1342 | 0.7398 | 0.7398 | -1 |
| 0.0 | 14.99 | 46920 | 220491.7812 | 0.1342 | 0.7406 | 0.7406 | -1 |
| 0.0 | 15.49 | 48484 | 239906.6094 | 0.1342 | 0.7417 | 0.7417 | -1 |
| 0.0 | 15.99 | 50048 | 242919.1875 | 0.1342 | 0.7425 | 0.7425 | -1 |
| 0.0 | 16.49 | 51612 | 249936.1719 | 0.1303 | 0.7431 | 0.7431 | -1 |
| 0.0 | 16.99 | 53176 | 259517.2344 | 0.1312 | 0.7441 | 0.7441 | -1 |
| 0.0 | 17.49 | 54740 | 261543.9531 | 0.1244 | 0.7469 | 0.7469 | -1 |
| 0.0 | 17.99 | 56304 | 267596.125 | 0.1244 | 0.7465 | 0.7465 | -1 |
| 0.0 | 18.49 | 57868 | 271231.7188 | 0.1234 | 0.7465 | 0.7465 | -1 |
| 0.0 | 18.99 | 59432 | 272808.6562 | 0.1214 | 0.7479 | 0.7479 | -1 |
| 0.0 | 19.49 | 60996 | 273415.9688 | 0.1205 | 0.7479 | 0.7479 | -1 |
| 0.0 | 19.99 | 62560 | 274608.5938 | 0.1195 | 0.7487 | 0.7487 | -1 |

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.2+cu118
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Full model name: jialicheng/unlearn_speech_commands_hubert-base_bad_teaching_2_42