---
license: other
tags:
  - generated_from_keras_callback
model-index:
  - name: AhamadShaik/SegFormer_RESIZE_LM
    results: []
---

# AhamadShaik/SegFormer_RESIZE_LM

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the training and evaluation sets:

- Train Loss: 0.0679
- Train Dice Coef: 0.7980
- Train Iou: 0.6817
- Validation Loss: 0.0483
- Validation Dice Coef: 0.8809
- Validation Iou: 0.7881
- Train Lr: 5e-06
- Epoch: 13
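The exact metric implementations are not included in this card; a standard formulation of the Dice coefficient and IoU for binary segmentation masks (a minimal sketch, assuming NumPy arrays and a small smoothing term to avoid division by zero) is:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1e-6):
    # Dice = 2*|A intersect B| / (|A| + |B|)
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def iou(y_true, y_pred, smooth=1e-6):
    # IoU = |A intersect B| / |A union B|
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)

# Example: two 2x2 binary masks that overlap in one pixel
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_coef(a, b))  # ~0.667
print(iou(a, b))        # ~0.5
```

Note that IoU is always at most the Dice coefficient for the same masks, which matches the pattern in the results above.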

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
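The optimizer dictionary above is a standard Keras Adam configuration. As an illustration of what those hyperparameters control (a plain-Python sketch of one Adam update on a scalar parameter, not the actual training code), one step looks like:

```python
def adam_step(param, grad, m, v, t,
              lr=5e-06, beta_1=0.9, beta_2=0.999, epsilon=1e-07):
    """One Adam update for a scalar parameter (illustrative sketch)."""
    m = beta_1 * m + (1 - beta_1) * grad        # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta_1 ** t)               # bias correction for step t
    v_hat = v / (1 - beta_2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + epsilon)
    return param, m, v

# First step (t=1) on a parameter starting at 0.0 with gradient 1.0:
# after bias correction the step size is essentially the learning rate, 5e-06
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(p)
```

With `amsgrad=False` and `decay=0.0` as in the card, the step size stays bounded by the learning rate regardless of the gradient magnitude.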

### Training results

| Train Loss | Train Dice Coef | Train Iou | Validation Loss | Validation Dice Coef | Validation Iou | Train Lr | Epoch |
|:----------:|:---------------:|:---------:|:---------------:|:--------------------:|:--------------:|:--------:|:-----:|
| 0.3496     | 0.3697          | 0.2435    | 0.2697          | 0.1141               | 0.0635         | 1e-04    | 0     |
| 0.1591     | 0.5600          | 0.4126    | 0.1768          | 0.3601               | 0.2345         | 1e-04    | 1     |
| 0.1295     | 0.6470          | 0.5014    | 0.1637          | 0.4628               | 0.3163         | 1e-04    | 2     |
| 0.1109     | 0.6903          | 0.5511    | 0.1319          | 0.5634               | 0.4072         | 1e-04    | 3     |
| 0.1018     | 0.7226          | 0.5858    | 0.0932          | 0.7480               | 0.6051         | 1e-04    | 4     |
| 0.0930     | 0.7373          | 0.6042    | 0.1618          | 0.5048               | 0.3614         | 1e-04    | 5     |
| 0.0878     | 0.7534          | 0.6255    | 0.1023          | 0.7076               | 0.5637         | 1e-04    | 6     |
| 0.0842     | 0.7585          | 0.6310    | 0.0878          | 0.7726               | 0.6384         | 1e-04    | 7     |
| 0.0798     | 0.7733          | 0.6475    | 0.0966          | 0.7434               | 0.5996         | 1e-04    | 8     |
| 0.0765     | 0.7716          | 0.6487    | 0.1073          | 0.7157               | 0.5657         | 1e-04    | 9     |
| 0.0701     | 0.7974          | 0.6794    | 0.1049          | 0.7190               | 0.5811         | 1e-04    | 10    |
| 0.0675     | 0.8020          | 0.6854    | 0.1319          | 0.6935               | 0.5427         | 1e-04    | 11    |
| 0.0662     | 0.8108          | 0.6957    | 0.1593          | 0.6269               | 0.4826         | 1e-04    | 12    |
| 0.0679     | 0.7980          | 0.6817    | 0.0483          | 0.8809               | 0.7881         | 5e-06    | 13    |

### Framework versions

- Transformers 4.27.4
- TensorFlow 2.10.1
- Datasets 2.11.0
- Tokenizers 0.13.3