babylm-base10m-gpt2

This model is a GPT-2-style causal language model trained from the configuration at `pretrain_service/decoder/config/config.json` on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.8694
  • Accuracy: 0.4824
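
Assuming the reported loss is the mean per-token cross-entropy (the `transformers` Trainer default for causal language modeling), this corresponds to a validation perplexity of exp(2.8694) ≈ 17.6.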

Model description

A GPT-2-architecture causal language model with 98.4M parameters (stored as F32 safetensors). Further details are not yet documented.

Intended uses & limitations

More information needed
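
As a minimal usage sketch (assuming the tokenizer is published alongside the checkpoint, which this card does not state), the model can be loaded with the standard `transformers` causal-LM API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alexandertam/babylm-base10m-gpt2"

# Assumption: the Hub repo bundles a compatible tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt; this is a small LM, so expect simple continuations.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```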

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • training_steps: 20000
  • mixed_precision_training: Native AMP
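
For reference, a minimal sketch of `TrainingArguments` matching these settings (the `output_dir` and anything not listed above are assumptions; only the listed values come from the actual run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="babylm-base10m-gpt2",  # assumed name, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",               # AdamW as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=20_000,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```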

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 5.1122        | 0.0909 | 200   | 4.7293          | 0.3483   |
| 4.3375        | 0.1817 | 400   | 4.2771          | 0.3642   |
| 4.1386        | 0.2726 | 600   | 4.1027          | 0.3726   |
| 4.0068        | 0.3635 | 800   | 4.0034          | 0.3750   |
| 3.8941        | 0.4543 | 1000  | 3.9384          | 0.3786   |
| 3.778         | 0.5452 | 1200  | 3.8843          | 0.3818   |
| 3.7554        | 0.6361 | 1400  | 3.8389          | 0.3851   |
| 3.6796        | 0.7269 | 1600  | 3.7815          | 0.3915   |
| 3.6266        | 0.8178 | 1800  | 3.7347          | 0.3952   |
| 3.601         | 0.9087 | 2000  | 3.6784          | 0.4023   |
| 3.2525        | 1.8174 | 4000  | 3.3798          | 0.4285   |
| 3.1444        | 2.7260 | 6000  | 3.2090          | 0.4433   |
| 2.9805        | 3.6347 | 8000  | 3.0989          | 0.4543   |
| 2.8567        | 4.5434 | 10000 | 3.0173          | 0.4636   |
| 2.7515        | 5.4521 | 12000 | 2.9567          | 0.4713   |
| 2.7002        | 6.3607 | 14000 | 2.9195          | 0.4757   |
| 2.6189        | 7.2694 | 16000 | 2.8929          | 0.4792   |
| 2.6109        | 8.1781 | 18000 | 2.8772          | 0.4813   |

Framework versions

  • Transformers 4.50.3
  • Pytorch 2.7.1+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.4