---
library_name: transformers
base_model: openai/clip-vit-large-patch14
tags:
- generated_from_trainer
model-index:
- name: cliplarge-ROCOv2-radiology-15ep
  results: []
---

# cliplarge-ROCOv2-radiology-15ep

This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unknown dataset (the model name suggests the ROCOv2 radiology image-caption dataset).
It achieves the following results on the evaluation set:
- Loss: 0.7840

## Model description

A CLIP ViT-L/14 dual encoder: an image tower and a text tower trained jointly with a contrastive image-text objective. Judging by the model name, this checkpoint was fine-tuned for 15 epochs on radiology image-caption pairs, but the training script did not record dataset details.

## Intended uses & limitations

Presumably intended for radiology image-text retrieval and zero-shot classification of radiology images. The model has not been validated for clinical use and should not be used for diagnosis or other medical decision-making.

## Training and evaluation data

Not recorded by the training script. The model name points to ROCOv2 (Radiology Objects in COntext v2), a dataset of radiology images paired with captions; evaluation ran on a held-out split every 500 steps (see the table below).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 1.0779        | 0.3294  | 500   | 1.0447          |
| 0.8371        | 0.6588  | 1000  | 0.8224          |
| 0.7569        | 0.9881  | 1500  | 0.7495          |
| 0.5572        | 1.3175  | 2000  | 0.7123          |
| 0.5883        | 1.6469  | 2500  | 0.6588          |
| 0.5322        | 1.9763  | 3000  | 0.6107          |
| 0.3329        | 2.3057  | 3500  | 0.6131          |
| 0.3353        | 2.6350  | 4000  | 0.5954          |
| 0.3149        | 2.9644  | 4500  | 0.5589          |
| 0.1825        | 3.2938  | 5000  | 0.6371          |
| 0.2162        | 3.6232  | 5500  | 0.6013          |
| 0.215         | 3.9526  | 6000  | 0.6012          |
| 0.139         | 4.2819  | 6500  | 0.6403          |
| 0.1488        | 4.6113  | 7000  | 0.6309          |
| 0.1499        | 4.9407  | 7500  | 0.6224          |
| 0.0964        | 5.2701  | 8000  | 0.6860          |
| 0.114         | 5.5995  | 8500  | 0.6665          |
| 0.1081        | 5.9289  | 9000  | 0.6511          |
| 0.0738        | 6.2582  | 9500  | 0.7260          |
| 0.0845        | 6.5876  | 10000 | 0.6962          |
| 0.0869        | 6.9170  | 10500 | 0.6943          |
| 0.0608        | 7.2464  | 11000 | 0.7290          |
| 0.0732        | 7.5758  | 11500 | 0.7465          |
| 0.0718        | 7.9051  | 12000 | 0.7409          |
| 0.0446        | 8.2345  | 12500 | 0.7592          |
| 0.0502        | 8.5639  | 13000 | 0.7810          |
| 0.048         | 8.8933  | 13500 | 0.7845          |
| 0.0312        | 9.2227  | 14000 | 0.8026          |
| 0.0388        | 9.5520  | 14500 | 0.7967          |
| 0.0376        | 9.8814  | 15000 | 0.7953          |
| 0.0255        | 10.2108 | 15500 | 0.8029          |
| 0.0228        | 10.5402 | 16000 | 0.8011          |
| 0.0295        | 10.8696 | 16500 | 0.8137          |
| 0.0328        | 11.1989 | 17000 | 0.7920          |
| 0.0176        | 11.5283 | 17500 | 0.7832          |
| 0.0247        | 11.8577 | 18000 | 0.8009          |
| 0.0159        | 12.1871 | 18500 | 0.7912          |
| 0.023         | 12.5165 | 19000 | 0.8052          |
| 0.0234        | 12.8458 | 19500 | 0.8105          |
| 0.013         | 13.1752 | 20000 | 0.8039          |
| 0.0198        | 13.5046 | 20500 | 0.7857          |
| 0.0151        | 13.8340 | 21000 | 0.7990          |
| 0.0123        | 14.1634 | 21500 | 0.7879          |
| 0.0101        | 14.4928 | 22000 | 0.7839          |
| 0.013         | 14.8221 | 22500 | 0.7840          |

Note that the validation loss bottoms out at 0.5589 around epoch 3 (step 4500) and trends upward afterwards while the training loss keeps falling; the later epochs overfit, and the final checkpoint ends at 0.7840. An early-stopped checkpoint from around epoch 3 would likely generalize better.

### Framework versions

- Transformers 4.44.2
- PyTorch 2.5.1+cu124
- Datasets 4.4.1
- Tokenizers 0.19.1
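
## How to use

Below is a minimal sketch of zero-shot caption matching with this checkpoint, using the standard `CLIPModel`/`CLIPProcessor` API from `transformers`. The `model_id`, image path, and candidate captions are placeholders, not values recorded in this card; substitute the actual Hub repo id or local path of this model.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Placeholder: replace with the actual Hub repo id or local checkpoint path.
model_id = "cliplarge-ROCOv2-radiology-15ep"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
model.eval()

# Placeholder inputs: any radiology image and a set of candidate captions.
image = Image.open("chest_xray.png")
texts = [
    "a chest X-ray showing pleural effusion",
    "an axial CT scan of the abdomen",
    "a T2-weighted MRI of the brain",
]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores, one column per caption;
# softmax turns them into a probability over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
for text, p in zip(texts, probs[0]):
    print(f"{p.item():.3f}  {text}")
```

The same two encoders can be used separately (`model.get_image_features` / `model.get_text_features`) to build an image-text retrieval index over a report corpus.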
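
## Reproducing the training configuration

The `generated_from_trainer` tag indicates the Hugging Face `Trainer` produced this card, so the hyperparameter list above maps directly onto `TrainingArguments`. The sketch below mirrors that list; the 500-step evaluation cadence is inferred from the training-results table, and dataset loading, the image-text collator, and the `Trainer` call itself are omitted.

```python
from transformers import TrainingArguments

# Sketch matching the "Training hyperparameters" list above; not the
# original training script, which was not published with this card.
training_args = TrainingArguments(
    output_dir="cliplarge-ROCOv2-radiology-15ep",
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",  # evaluate every eval_steps
    eval_steps=500,         # inferred from the results table
    logging_steps=500,
)
```

Given the overfitting visible in the results table, adding `load_best_model_at_end=True` with `metric_for_best_model="eval_loss"` would recover the stronger epoch-3 checkpoint instead of the final one.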