cali_wildfires_superbuckets

This model is a fine-tuned version of gdurkin/cali_wildfires on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 39.8611

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 30
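
A quick sketch (not from the card itself) of how the listed values combine: the reported total_train_batch_size is just train_batch_size times gradient_accumulation_steps, and the warmup ratio of 0.05, applied to the total optimizer steps visible in the results table below (~908 steps/epoch for 30 epochs), gives the approximate warmup length. The steps-per-epoch figure is read off the table, not stated by the card.

```python
# Effective batch size = per-device batch size x gradient accumulation steps
train_batch_size = 2
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 8  # matches the value reported above

# Approximate warmup length implied by lr_scheduler_warmup_ratio = 0.05.
# ~908 optimizer steps per epoch is taken from the results table.
steps_per_epoch = 908
num_epochs = 30
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(0.05 * total_steps)
assert total_steps == 27240
assert warmup_steps == 1362
```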

Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 46.9823       | 0.9992  | 908   | 44.2028         |
| 43.7026       | 1.9994  | 1817  | 41.6333         |
| 43.1604       | 2.9997  | 2726  | 42.6112         |
| 42.7701       | 4.0     | 3635  | 41.1723         |
| 42.448        | 4.9992  | 4543  | 41.7719         |
| 42.1016       | 5.9994  | 5452  | 41.3481         |
| 41.7597       | 6.9997  | 6361  | 40.7160         |
| 41.931        | 8.0     | 7270  | 40.4156         |
| 40.8349       | 8.9992  | 8178  | 41.7616         |
| 41.4868       | 9.9994  | 9087  | 39.8706         |
| 40.8336       | 10.9997 | 9996  | 40.9914         |
| 40.5751       | 12.0    | 10905 | 40.6050         |
| 40.587        | 12.9992 | 11813 | 40.2944         |
| 40.2109       | 13.9994 | 12722 | 40.7404         |
| 40.7232       | 14.9997 | 13631 | 40.0876         |
| 40.0984       | 16.0    | 14540 | 40.0743         |
| 40.121        | 16.9992 | 15448 | 39.3861         |
| 39.6405       | 17.9994 | 16357 | 39.7561         |
| 39.7266       | 18.9997 | 17266 | 39.7452         |
| 39.216        | 20.0    | 18175 | 39.9445         |
| 39.4727       | 20.9992 | 19083 | 40.1958         |
| 39.6189       | 21.9994 | 19992 | 40.1457         |
| 39.622        | 22.9997 | 20901 | 40.0345         |
| 38.9822       | 24.0    | 21810 | 39.8065         |
| 39.4845       | 24.9992 | 22718 | 39.8453         |
| 39.2516       | 25.9994 | 23627 | 39.8971         |
| 39.099        | 26.9997 | 24536 | 39.8205         |
| 38.8892       | 28.0    | 25445 | 39.7963         |
| 38.7629       | 28.9992 | 26353 | 39.6499         |
| 38.6387       | 29.9752 | 27240 | 39.8611         |
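The learning rate behind these results followed a cosine schedule with warmup (lr_scheduler_type: cosine, warmup_ratio: 0.05). A generic sketch of that schedule, using step counts read off the table above; this is a common formulation, not the exact Transformers implementation:

```python
import math

def lr_at_step(step, total_steps=27240, warmup_steps=1362, peak_lr=5e-5):
    """Linear warmup to peak_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# LR ramps from 0, peaks at the end of warmup, and decays to ~0 by the final step.
```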

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.4.1+cu124
  • Datasets 2.21.0
  • Tokenizers 0.20.3