---
library_name: transformers
base_model: openai/clip-vit-large-patch14
tags:
  - generated_from_trainer
model-index:
  - name: sail-clip-hendrix-10epochs
    results: []
---

# sail-clip-hendrix-10epochs

This model is a fine-tuned version of openai/clip-vit-large-patch14 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 6.6413
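A minimal usage sketch for zero-shot image-text matching with this checkpoint. The repo id below is assumed from the model name; replace it with the actual Hub id or a local path to the saved weights.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repo id assumed from the model name; adjust to the real Hub id or a local directory.
model_id = "cringgaard/sail-clip-hendrix-10epochs"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.new("RGB", (224, 224))  # stand-in for a real image
texts = ["a photo of a sailboat", "a photo of a dog"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
# One row per image, one column per text; softmax gives match probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
```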

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
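The linear schedule above can be sketched in plain Python. The total step count (1810) is inferred from the results table (step 1800 at epoch ~9.94); the warmup setting is an assumption, since the card does not report one.

```python
def linear_lr(step, total_steps=1810, base_lr=5e-5, warmup_steps=0):
    """Linear decay from base_lr to 0 -- the shape of the 'linear' scheduler.

    total_steps=1810 is inferred from the results table below;
    warmup_steps=0 is an assumption (the card reports no warmup value).
    """
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr at the end of warmup to 0 at total_steps.
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))
```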

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0802        | 0.5525 | 100  | 4.0790          |
| 3.756         | 1.1050 | 200  | 3.8015          |
| 3.5546        | 1.6575 | 300  | 3.5559          |
| 3.0525        | 2.2099 | 400  | 3.4319          |
| 2.8911        | 2.7624 | 500  | 3.3547          |
| 2.2105        | 3.3149 | 600  | 3.3261          |
| 2.1826        | 3.8674 | 700  | 3.1662          |
| 1.575         | 4.4199 | 800  | 3.4461          |
| 1.5164        | 4.9724 | 900  | 3.2057          |
| 0.8215        | 5.5249 | 1000 | 4.0640          |
| 0.4965        | 6.0773 | 1100 | 5.4518          |
| 0.4627        | 6.6298 | 1200 | 4.9572          |
| 0.3134        | 7.1823 | 1300 | 5.4385          |
| 0.2787        | 7.7348 | 1400 | 5.5018          |
| 0.1514        | 8.2873 | 1500 | 5.9184          |
| 0.1919        | 8.8398 | 1600 | 6.4257          |
| 0.1198        | 9.3923 | 1700 | 6.5754          |
| 0.0896        | 9.9448 | 1800 | 6.6413          |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1