gpt2-NaturalQuestions_4000-ep20

This model is a fine-tuned version of gpt2. The fine-tuning dataset is not recorded in the card metadata, although the model name suggests a 4,000-example subset of Natural Questions. It achieves the following result on the evaluation set:

  • Loss: 1.2717

Model description

More information needed

Intended uses & limitations

More information needed
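
Absent further documentation, the checkpoint loads like any GPT-2 causal language model. Below is a minimal generation sketch; the repo id is an assumption based on the card's title, and the "Question: ... Answer:" prompt format is a guess, since the card does not document how the training data was formatted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical hub id, taken from the card's title.
model_id = "gpt2-NaturalQuestions_4000-ep20"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed prompt format; the card does not document one.
prompt = "Question: who wrote the Declaration of Independence?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```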

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a TrainingArguments sketch reproducing them follows the list:

  • learning_rate: 2e-05
  • train_batch_size: 48
  • eval_batch_size: 96
  • seed: 1799
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
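
A minimal sketch of these settings as transformers TrainingArguments (version 4.29.2, per the framework list below). The output_dir and the evaluation cadence are assumptions not recorded on the card, though the results table logs every 50 steps.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-NaturalQuestions_4000-ep20",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=96,
    seed=1799,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumed from the 50-step eval log
    eval_steps=50,                # assumed
    logging_steps=50,             # assumed
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # Trainer's default AdamW settings, so nothing extra is needed.
)
```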

Training results

Training Loss  Epoch  Step  Validation Loss
1.7832         0.60   50    1.3857
1.4409         1.19   100   1.3070
1.3120         1.79   150   1.2716
1.2412         2.38   200   1.2514
1.1902         2.98   250   1.2283
1.1171         3.57   300   1.2307
1.0728         4.17   350   1.2228
1.0220         4.76   400   1.2145
1.0001         5.36   450   1.2225
0.9645         5.95   500   1.2183
0.9175         6.55   550   1.2185
0.9110         7.14   600   1.2179
0.8770         7.74   650   1.2218
0.8365         8.33   700   1.2260
0.8387         8.93   750   1.2223
0.8047         9.52   800   1.2289
0.7822         10.12  850   1.2304
0.7652         10.71  900   1.2353
0.7502         11.31  950   1.2370
0.7275         11.90  1000  1.2411
0.6998         12.50  1050  1.2515
0.7128         13.10  1100  1.2465
0.6865         13.69  1150  1.2553
0.6748         14.29  1200  1.2544
0.6661         14.88  1250  1.2563
0.6636         15.48  1300  1.2592
0.6403         16.07  1350  1.2630
0.6309         16.67  1400  1.2679
0.6281         17.26  1450  1.2667
0.6237         17.86  1500  1.2692
0.6210         18.45  1550  1.2708
0.6195         19.05  1600  1.2711
0.6123         19.64  1650  1.2713
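
Validation loss bottoms out at 1.2145 around step 400 (epoch ~4.8) and climbs steadily afterwards while training loss keeps falling, the usual overfitting pattern. Assuming the reported loss is the standard mean token-level cross-entropy in nats (the Trainer default for causal language models), perplexity is simply its exponential:

```python
import math

# Perplexity = exp(cross-entropy loss). Values from the table above;
# the nats/cross-entropy assumption is noted in the text.
best_eval_loss = 1.2145   # step 400, epoch ~4.8
final_eval_loss = 1.2717  # reported final result

print(f"best perplexity:  {math.exp(best_eval_loss):.2f}")   # ~3.37
print(f"final perplexity: {math.exp(final_eval_loss):.2f}")  # ~3.57
```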

Framework versions

  • Transformers 4.29.2
  • Pytorch 1.10.0+cu111
  • Datasets 2.5.1
  • Tokenizers 0.13.3