bert_tiny_olda_book_10_v1

This model was fine-tuned on the gokulsrinivasagan/processed_book_corpus-ld dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the results):

  • Loss: 3.3858
  • Accuracy: 0.6830
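
The checkpoint is a masked language model, so it can be exercised directly with the fill-mask pipeline. This is a minimal sketch, assuming the repository gokulsrinivasagan/bert_tiny_olda_book_10_v1 exposes a standard BERT-style masked-LM head; the example sentence is illustrative only:

```python
# Minimal fill-mask sketch (assumes a BERT-style masked-LM checkpoint).
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="gokulsrinivasagan/bert_tiny_olda_book_10_v1",
)

# Top predictions for the masked position; the sentence is an arbitrary example.
for prediction in fill_mask("The book was lying on the [MASK]."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```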

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 160
  • eval_batch_size: 160
  • seed: 10
  • optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10000
  • num_epochs: 25
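
For orientation, the settings above map onto transformers TrainingArguments roughly as sketched below. This is a reconstruction from the list, not the original training script; output_dir and the per-device interpretation of the batch size are assumptions:

```python
# Sketch of TrainingArguments mirroring the listed hyperparameters (not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_tiny_olda_book_10_v1",  # placeholder; actual path unknown
    learning_rate=1e-4,
    per_device_train_batch_size=160,         # card lists 160; per-device vs. total is assumed
    per_device_eval_batch_size=160,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=25,
)
```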

Training results

| Training Loss | Epoch   | Step   | Validation Loss | Accuracy |
|:-------------:|:-------:|:------:|:---------------:|:--------:|
| 7.9444        | 0.7025  | 10000  | 7.7822          | 0.1645   |
| 5.4643        | 1.4051  | 20000  | 5.0189          | 0.4642   |
| 4.6357        | 2.1076  | 30000  | 4.2607          | 0.5609   |
| 4.3748        | 2.8102  | 40000  | 4.0260          | 0.5931   |
| 4.2234        | 3.5127  | 50000  | 3.8869          | 0.6126   |
| 4.124         | 4.2153  | 60000  | 3.8000          | 0.6246   |
| 4.0618        | 4.9178  | 70000  | 3.7356          | 0.6336   |
| 4.0085        | 5.6203  | 80000  | 3.6909          | 0.6401   |
| 3.9699        | 6.3229  | 90000  | 3.6567          | 0.6450   |
| 3.9326        | 7.0254  | 100000 | 3.6208          | 0.6494   |
| 3.9042        | 7.7280  | 110000 | 3.5874          | 0.6532   |
| 3.8784        | 8.4305  | 120000 | 3.5628          | 0.6563   |
| 3.8517        | 9.1331  | 130000 | 3.5493          | 0.6585   |
| 3.8371        | 9.8356  | 140000 | 3.5277          | 0.6617   |
| 3.819         | 10.5381 | 150000 | 3.5137          | 0.6635   |
| 3.8098        | 11.2407 | 160000 | 3.5034          | 0.6651   |
| 3.7923        | 11.9432 | 170000 | 3.4892          | 0.6671   |
| 3.7814        | 12.6458 | 180000 | 3.4750          | 0.6690   |
| 3.7706        | 13.3483 | 190000 | 3.4704          | 0.6698   |
| 3.7619        | 14.0509 | 200000 | 3.4600          | 0.6714   |
| 3.7517        | 14.7534 | 210000 | 3.4511          | 0.6729   |
| 3.7412        | 15.4560 | 220000 | 3.4419          | 0.6744   |
| 3.7378        | 16.1585 | 230000 | 3.4389          | 0.6750   |
| 3.7265        | 16.8610 | 240000 | 3.4310          | 0.6760   |
| 3.7196        | 17.5636 | 250000 | 3.4251          | 0.6769   |
| 3.7141        | 18.2661 | 260000 | 3.4204          | 0.6773   |
| 3.7113        | 18.9687 | 270000 | 3.4147          | 0.6784   |
| 3.7028        | 19.6712 | 280000 | 3.4093          | 0.6796   |
| 3.7006        | 20.3738 | 290000 | 3.4050          | 0.6801   |
| 3.6919        | 21.0763 | 300000 | 3.4009          | 0.6807   |
| 3.6906        | 21.7788 | 310000 | 3.3953          | 0.6817   |
| 3.6844        | 22.4814 | 320000 | 3.3929          | 0.6819   |
| 3.683         | 23.1839 | 330000 | 3.3906          | 0.6823   |
| 3.6813        | 23.8865 | 340000 | 3.3874          | 0.6828   |
| 3.6798        | 24.5890 | 350000 | 3.3847          | 0.6833   |
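
Assuming the reported validation loss is the mean cross-entropy over masked tokens, it converts to a pseudo-perplexity via exp(loss). A quick check using the final evaluation loss from the top of the card:

```python
# Convert the reported masked-LM cross-entropy loss to pseudo-perplexity.
import math

eval_loss = 3.3858  # evaluation loss reported above
print(f"pseudo-perplexity ~ {math.exp(eval_loss):.1f}")  # ~ 29.5
```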

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.2.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.1

Model size

  • 33.3M parameters (Safetensors, F32)

Evaluation results

  • Accuracy on gokulsrinivasagan/processed_book_corpus-ld (self-reported): 0.683