Lalith47 committed on
Commit 025296b · verified · 1 Parent(s): 8782dd1

End of training

Files changed (2):
  1. README.md +38 -58
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  license: apache-2.0
- base_model: Lalith47/custom-cloud-model
  tags:
  - generated_from_trainer
  datasets:
@@ -23,7 +23,7 @@ model-index:
  metrics:
  - name: Accuracy
    type: accuracy
- value: 0.689587414264679
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +31,10 @@ should probably proofread and complete it, then remove this comment. -->

  # custom-cloud-model

- This model is a fine-tuned version of [Lalith47/custom-cloud-model](https://huggingface.co/Lalith47/custom-cloud-model) on the imagefolder dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1373
- - Accuracy: 0.6896

  ## Model description

@@ -53,72 +53,52 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 2e-05
  - train_batch_size: 16
  - eval_batch_size: 16
  - seed: 42
  - gradient_accumulation_steps: 4
  - total_train_batch_size: 64
  - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 50
  - mixed_precision_training: Native AMP

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 2.273  | 1.0  | 32   | 2.1875 | 0.1631 |
- | 1.8349 | 2.0  | 64   | 1.7155 | 0.4479 |
- | 1.4566 | 3.0  | 96   | 1.3455 | 0.5717 |
- | 1.1832 | 4.0  | 128  | 1.1565 | 0.5855 |
- | 1.0294 | 5.0  | 160  | 1.0912 | 0.6365 |
- | 0.9185 | 6.0  | 192  | 1.0261 | 0.6523 |
- | 0.7852 | 7.0  | 224  | 0.9980 | 0.6405 |
- | 0.7867 | 8.0  | 256  | 0.9890 | 0.6306 |
- | 0.6186 | 9.0  | 288  | 0.9861 | 0.6582 |
- | 0.6408 | 10.0 | 320  | 0.9740 | 0.6523 |
- | 0.5585 | 11.0 | 352  | 0.9828 | 0.6621 |
- | 0.5316 | 12.0 | 384  | 0.9386 | 0.6739 |
- | 0.4884 | 13.0 | 416  | 0.9325 | 0.6346 |
- | 0.466  | 14.0 | 448  | 0.9182 | 0.6739 |
- | 0.4018 | 15.0 | 480  | 0.9588 | 0.6660 |
- | 0.3776 | 16.0 | 512  | 0.9305 | 0.6778 |
- | 0.3858 | 17.0 | 544  | 0.9876 | 0.6582 |
- | 0.3312 | 18.0 | 576  | 1.0370 | 0.6287 |
- | 0.3323 | 19.0 | 608  | 0.9705 | 0.6817 |
- | 0.2896 | 20.0 | 640  | 0.9784 | 0.6876 |
- | 0.2703 | 21.0 | 672  | 0.9497 | 0.6916 |
- | 0.2521 | 22.0 | 704  | 1.0588 | 0.6601 |
- | 0.2079 | 23.0 | 736  | 1.0287 | 0.6582 |
- | 0.2371 | 24.0 | 768  | 0.9888 | 0.6739 |
- | 0.2604 | 25.0 | 800  | 1.0015 | 0.6621 |
- | 0.1952 | 26.0 | 832  | 1.0272 | 0.6837 |
- | 0.2373 | 27.0 | 864  | 1.0004 | 0.6778 |
- | 0.2195 | 28.0 | 896  | 1.0871 | 0.6601 |
- | 0.198  | 29.0 | 928  | 1.0482 | 0.6817 |
- | 0.1681 | 30.0 | 960  | 1.0531 | 0.6798 |
- | 0.219  | 31.0 | 992  | 1.0627 | 0.6699 |
- | 0.1801 | 32.0 | 1024 | 1.0884 | 0.6582 |
- | 0.2065 | 33.0 | 1056 | 1.1099 | 0.6660 |
- | 0.1526 | 34.0 | 1088 | 1.0921 | 0.6582 |
- | 0.1632 | 35.0 | 1120 | 1.0851 | 0.6817 |
- | 0.1548 | 36.0 | 1152 | 1.1042 | 0.6758 |
- | 0.1712 | 37.0 | 1184 | 1.1042 | 0.6719 |
- | 0.1393 | 38.0 | 1216 | 1.1022 | 0.6660 |
- | 0.1487 | 39.0 | 1248 | 1.1125 | 0.6817 |
- | 0.1425 | 40.0 | 1280 | 1.0962 | 0.6876 |
- | 0.1521 | 41.0 | 1312 | 1.0355 | 0.6857 |
- | 0.115  | 42.0 | 1344 | 1.1091 | 0.6562 |
- | 0.1353 | 43.0 | 1376 | 1.1370 | 0.6817 |
- | 0.1542 | 44.0 | 1408 | 1.1130 | 0.6798 |
- | 0.1399 | 45.0 | 1440 | 1.1029 | 0.6876 |
- | 0.0943 | 46.0 | 1472 | 1.1108 | 0.6876 |
- | 0.1367 | 47.0 | 1504 | 1.0329 | 0.6974 |
- | 0.1559 | 48.0 | 1536 | 1.0935 | 0.6719 |
- | 0.148  | 49.0 | 1568 | 1.0697 | 0.6896 |
- | 0.1013 | 50.0 | 1600 | 1.1373 | 0.6896 |


  ### Framework versions
 
  ---
  library_name: transformers
  license: apache-2.0
+ base_model: microsoft/swin-base-patch4-window7-224
  tags:
  - generated_from_trainer
  datasets:

  metrics:
  - name: Accuracy
    type: accuracy
+ value: 0.6701570749282837
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You

  # custom-cloud-model

+ This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 1.6095
+ - Accuracy: 0.6702

  ## Model description
 
  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - learning_rate: 5e-05
  - train_batch_size: 16
  - eval_batch_size: 16
  - seed: 42
  - gradient_accumulation_steps: 4
  - total_train_batch_size: 64
  - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 30
  - mixed_precision_training: Native AMP
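The updated run's schedule can be sanity-checked in plain Python. This is a minimal sketch, not the training script: it assumes the usual cosine-with-warmup shape implied by `lr_scheduler_type: cosine` plus `lr_scheduler_warmup_ratio: 0.1`, and derives 1020 total optimizer steps from the results table (34 optimizer steps per epoch × 30 epochs).

```python
import math

# Hyperparameters as listed in the card (new run).
LEARNING_RATE = 5e-05
WARMUP_RATIO = 0.1
TOTAL_STEPS = 1020  # 34 optimizer steps/epoch x 30 epochs, per the results table
WARMUP_STEPS = int(TOTAL_STEPS * WARMUP_RATIO)  # 102

# Effective batch size: per-device batch x gradient accumulation.
TOTAL_TRAIN_BATCH = 16 * 4  # matches total_train_batch_size: 64

def lr_at(step: int) -> float:
    """Cosine decay with linear warmup (assumed shape of the HF cosine scheduler)."""
    if step < WARMUP_STEPS:
        # Linear ramp from 0 to the peak learning rate.
        return LEARNING_RATE * step / max(1, WARMUP_STEPS)
    # Cosine decay from the peak down to 0 over the remaining steps.
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return LEARNING_RATE * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Note that the listed `total_train_batch_size: 64` is simply `train_batch_size × gradient_accumulation_steps` (16 × 4).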
 
  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 2.0582 | 1.0  | 34   | 1.8348 | 0.3351 |
+ | 1.2969 | 2.0  | 68   | 1.2586 | 0.5288 |
+ | 0.9759 | 3.0  | 102  | 1.0590 | 0.6047 |
+ | 0.8705 | 4.0  | 136  | 0.9122 | 0.6466 |
+ | 0.6166 | 5.0  | 170  | 0.9506 | 0.6597 |
+ | 0.5408 | 6.0  | 204  | 0.9137 | 0.6623 |
+ | 0.3518 | 7.0  | 238  | 1.1081 | 0.6440 |
+ | 0.3488 | 8.0  | 272  | 1.0060 | 0.6545 |
+ | 0.3068 | 9.0  | 306  | 1.0221 | 0.6780 |
+ | 0.2824 | 10.0 | 340  | 1.1638 | 0.6283 |
+ | 0.2048 | 11.0 | 374  | 1.2044 | 0.6518 |
+ | 0.1972 | 12.0 | 408  | 1.2988 | 0.6440 |
+ | 0.1818 | 13.0 | 442  | 1.1882 | 0.6728 |
+ | 0.1316 | 14.0 | 476  | 1.2993 | 0.6518 |
+ | 0.12   | 15.0 | 510  | 1.2681 | 0.6754 |
+ | 0.0997 | 16.0 | 544  | 1.3582 | 0.6780 |
+ | 0.1069 | 17.0 | 578  | 1.3963 | 0.6571 |
+ | 0.078  | 18.0 | 612  | 1.4492 | 0.6675 |
+ | 0.0783 | 19.0 | 646  | 1.4504 | 0.6545 |
+ | 0.0765 | 20.0 | 680  | 1.5165 | 0.6675 |
+ | 0.068  | 21.0 | 714  | 1.4972 | 0.6649 |
+ | 0.0768 | 22.0 | 748  | 1.4949 | 0.6492 |
+ | 0.0631 | 23.0 | 782  | 1.5874 | 0.6754 |
+ | 0.0425 | 24.0 | 816  | 1.5859 | 0.6675 |
+ | 0.0503 | 25.0 | 850  | 1.5003 | 0.6702 |
+ | 0.0486 | 26.0 | 884  | 1.5484 | 0.6675 |
+ | 0.0383 | 27.0 | 918  | 1.5526 | 0.6780 |
+ | 0.036  | 28.0 | 952  | 1.6089 | 0.6623 |
+ | 0.0212 | 29.0 | 986  | 1.5983 | 0.6754 |
+ | 0.0269 | 30.0 | 1020 | 1.6095 | 0.6702 |
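The table shows a familiar overfitting curve: validation loss bottoms out around epoch 4 (0.9122) while training loss keeps shrinking, and accuracy plateaus near 0.67–0.68. Since the reported final metrics (Loss 1.6095, Accuracy 0.6702) match the epoch-30 row, the run appears to have kept the last checkpoint rather than the best one. A small sketch (plain Python, values transcribed from the table) of picking the best epoch by each criterion:

```python
# (epoch, validation_loss, accuracy), transcribed from the training results table.
RESULTS = [
    (1, 1.8348, 0.3351), (2, 1.2586, 0.5288), (3, 1.0590, 0.6047),
    (4, 0.9122, 0.6466), (5, 0.9506, 0.6597), (6, 0.9137, 0.6623),
    (7, 1.1081, 0.6440), (8, 1.0060, 0.6545), (9, 1.0221, 0.6780),
    (10, 1.1638, 0.6283), (11, 1.2044, 0.6518), (12, 1.2988, 0.6440),
    (13, 1.1882, 0.6728), (14, 1.2993, 0.6518), (15, 1.2681, 0.6754),
    (16, 1.3582, 0.6780), (17, 1.3963, 0.6571), (18, 1.4492, 0.6675),
    (19, 1.4504, 0.6545), (20, 1.5165, 0.6675), (21, 1.4972, 0.6649),
    (22, 1.4949, 0.6492), (23, 1.5874, 0.6754), (24, 1.5859, 0.6675),
    (25, 1.5003, 0.6702), (26, 1.5484, 0.6675), (27, 1.5526, 0.6780),
    (28, 1.6089, 0.6623), (29, 1.5983, 0.6754), (30, 1.6095, 0.6702),
]

# Best checkpoint by lowest validation loss vs. highest accuracy.
best_by_loss = min(RESULTS, key=lambda r: r[1])  # epoch 4, loss 0.9122
best_by_acc = max(RESULTS, key=lambda r: r[2])   # epoch 9 (first of the 0.6780 ties)
```

With the Hugging Face Trainer, `load_best_model_at_end=True` plus `metric_for_best_model="accuracy"` would restore the epoch-9 checkpoint instead; that is a suggestion, not something this run configured.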

  ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:46cc33c865239016f2564b8c84605b23cab009084767e8e2994ba300717c810e
  size 347531616

  version https://git-lfs.github.com/spec/v1
+ oid sha256:96ff8e75f20e77f24842e79c31a55a79f97047a152ff268eb01e2a7a09260e86
  size 347531616
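The `model.safetensors` entry above is not the weights themselves but a Git LFS pointer file: a three-line text stub whose `oid` names the real blob in LFS storage. Both versions report the same `size` (347531616 bytes); only the `oid` changed, meaning the weights were replaced by a retrained file of identical byte length. A minimal sketch of parsing such a pointer (the pointer text is copied from the diff):

```python
# The new pointer file, verbatim from the diff above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:96ff8e75f20e77f24842e79c31a55a79f97047a152ff268eb01e2a7a09260e86
size 347531616
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
# The oid is '<algorithm>:<digest>'; sha256 is the algorithm LFS uses.
algo, _, digest = fields["oid"].partition(":")
```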