MoaazTalab committed on
Commit 618fd2b · verified · 1 Parent(s): 4abff61

End of training

Files changed (5):
  1. README.md +58 -26
  2. config.json +14 -15
  3. model.safetensors +2 -2
  4. preprocessor_config.json +9 -4
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
-license: apache-2.0
-base_model: google/vit-large-patch16-224
+license: other
+base_model: google/mobilenet_v2_1.4_224
 tags:
 - generated_from_trainer
 metrics:
@@ -19,17 +19,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 # ViT_L16
 
-This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on an unknown dataset.
+This model is a fine-tuned version of [google/mobilenet_v2_1.4_224](https://huggingface.co/google/mobilenet_v2_1.4_224) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0933
-- Accuracy: 0.9746
-- Precision: 0.9844
-- Recall: 0.9603
-- F1: 0.9722
-- Tp: 1573
-- Tn: 1885
-- Fp: 25
-- Fn: 65
+- Loss: 0.1493
+- Accuracy: 0.9586
+- Precision: 0.9825
+- Recall: 0.9267
+- F1: 0.9538
+- Tp: 1518
+- Tn: 1883
+- Fp: 27
+- Fn: 120
 
 ## Model description
 
@@ -49,26 +49,58 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-06
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 221
-- num_epochs: 2
+- lr_scheduler_warmup_steps: 552
+- num_epochs: 10
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Tp | Tn | Fp | Fn |
-|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:----:|:----:|:--:|:---:|
-| 0.4292 | 0.2477 | 110 | 0.1793 | 0.9464 | 0.9925 | 0.8907 | 0.9389 | 1459 | 1899 | 11 | 179 |
-| 0.2230 | 0.4955 | 220 | 0.1211 | 0.9651 | 0.9980 | 0.9261 | 0.9607 | 1517 | 1907 | 3 | 121 |
-| 0.1974 | 0.7432 | 330 | 0.1342 | 0.9690 | 0.9811 | 0.9512 | 0.9659 | 1558 | 1880 | 30 | 80 |
-| 0.1879 | 0.9910 | 440 | 0.1397 | 0.9628 | 0.9591 | 0.9603 | 0.9597 | 1573 | 1843 | 67 | 65 |
-| 0.1643 | 1.2387 | 550 | 0.1083 | 0.9741 | 0.9856 | 0.9579 | 0.9715 | 1569 | 1887 | 23 | 69 |
-| 0.1654 | 1.4865 | 660 | 0.0963 | 0.9715 | 0.9936 | 0.9444 | 0.9684 | 1547 | 1900 | 10 | 91 |
-| 0.1664 | 1.7342 | 770 | 0.1130 | 0.9693 | 0.9693 | 0.9640 | 0.9666 | 1579 | 1860 | 50 | 59 |
-| 0.1637 | 1.9820 | 880 | 0.0933 | 0.9746 | 0.9844 | 0.9603 | 0.9722 | 1573 | 1885 | 25 | 65 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Tp | Tn | Fp | Fn |
+|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:----:|:----:|:---:|:----:|
+| 0.6498 | 0.2477 | 55 | 0.6233 | 0.5950 | 0.7445 | 0.1868 | 0.2987 | 306 | 1805 | 105 | 1332 |
+| 0.6063 | 0.4955 | 110 | 0.6222 | 0.6206 | 0.7160 | 0.2955 | 0.4183 | 484 | 1718 | 192 | 1154 |
+| 0.5307 | 0.7432 | 165 | 0.4872 | 0.8120 | 0.9036 | 0.6636 | 0.7652 | 1087 | 1794 | 116 | 551 |
+| 0.4383 | 0.9910 | 220 | 0.4204 | 0.8695 | 0.8808 | 0.8297 | 0.8544 | 1359 | 1726 | 184 | 279 |
+| 0.3821 | 1.2387 | 275 | 0.4293 | 0.8171 | 0.7416 | 0.9267 | 0.8239 | 1518 | 1381 | 529 | 120 |
+| 0.3412 | 1.4865 | 330 | 0.3600 | 0.8763 | 0.8601 | 0.8742 | 0.8671 | 1432 | 1677 | 233 | 206 |
+| 0.3323 | 1.7342 | 385 | 0.5002 | 0.7562 | 0.7125 | 0.7912 | 0.7498 | 1296 | 1387 | 523 | 342 |
+| 0.3128 | 1.9820 | 440 | 0.3087 | 0.9073 | 0.9078 | 0.8895 | 0.8986 | 1457 | 1762 | 148 | 181 |
+| 0.2916 | 2.2297 | 495 | 0.3092 | 0.9005 | 0.8640 | 0.9310 | 0.8963 | 1525 | 1670 | 240 | 113 |
+| 0.2882 | 2.4775 | 550 | 0.4698 | 0.7802 | 0.6864 | 0.9646 | 0.8020 | 1580 | 1188 | 722 | 58 |
+| 0.2775 | 2.7252 | 605 | 0.2448 | 0.9332 | 0.9420 | 0.9115 | 0.9265 | 1493 | 1818 | 92 | 145 |
+| 0.2577 | 2.9730 | 660 | 0.2544 | 0.9239 | 0.9264 | 0.9072 | 0.9167 | 1486 | 1792 | 118 | 152 |
+| 0.2541 | 3.2207 | 715 | 0.2914 | 0.9028 | 0.8542 | 0.9518 | 0.9004 | 1559 | 1644 | 266 | 79 |
+| 0.2499 | 3.4685 | 770 | 0.2302 | 0.9281 | 0.9314 | 0.9115 | 0.9213 | 1493 | 1800 | 110 | 145 |
+| 0.2356 | 3.7162 | 825 | 0.2430 | 0.9284 | 0.9109 | 0.9365 | 0.9235 | 1534 | 1760 | 150 | 104 |
+| 0.2403 | 3.9640 | 880 | 0.2341 | 0.9169 | 0.8929 | 0.9316 | 0.9119 | 1526 | 1727 | 183 | 112 |
+| 0.2454 | 4.2117 | 935 | 0.3786 | 0.8396 | 0.7642 | 0.9438 | 0.8446 | 1546 | 1433 | 477 | 92 |
+| 0.2296 | 4.4595 | 990 | 0.3143 | 0.8591 | 0.8014 | 0.9237 | 0.8582 | 1513 | 1535 | 375 | 125 |
+| 0.2311 | 4.7072 | 1045 | 0.3683 | 0.8238 | 0.7346 | 0.9683 | 0.8354 | 1586 | 1337 | 573 | 52 |
+| 0.2181 | 4.9550 | 1100 | 0.1968 | 0.9380 | 0.9350 | 0.9304 | 0.9327 | 1524 | 1804 | 106 | 114 |
+| 0.2119 | 5.2027 | 1155 | 0.3088 | 0.8661 | 0.7987 | 0.9493 | 0.8675 | 1555 | 1518 | 392 | 83 |
+| 0.2222 | 5.4505 | 1210 | 0.3543 | 0.8503 | 0.7780 | 0.9457 | 0.8537 | 1549 | 1468 | 442 | 89 |
+| 0.2047 | 5.6982 | 1265 | 0.1789 | 0.9462 | 0.9485 | 0.9341 | 0.9412 | 1530 | 1827 | 83 | 108 |
+| 0.2169 | 5.9459 | 1320 | 0.1936 | 0.9414 | 0.9503 | 0.9212 | 0.9355 | 1509 | 1831 | 79 | 129 |
+| 0.2233 | 6.1937 | 1375 | 0.2493 | 0.8949 | 0.8388 | 0.9560 | 0.8936 | 1566 | 1609 | 301 | 72 |
+| 0.2245 | 6.4414 | 1430 | 0.2624 | 0.8797 | 0.8172 | 0.9524 | 0.8796 | 1560 | 1561 | 349 | 78 |
+| 0.2220 | 6.6892 | 1485 | 0.2528 | 0.9101 | 0.8586 | 0.9640 | 0.9083 | 1579 | 1650 | 260 | 59 |
+| 0.2158 | 6.9369 | 1540 | 0.2083 | 0.9290 | 0.9130 | 0.9353 | 0.9240 | 1532 | 1764 | 146 | 106 |
+| 0.2151 | 7.1847 | 1595 | 0.1952 | 0.9394 | 0.9273 | 0.9426 | 0.9349 | 1544 | 1789 | 121 | 94 |
+| 0.2192 | 7.4324 | 1650 | 0.2952 | 0.8670 | 0.7941 | 0.9609 | 0.8696 | 1574 | 1502 | 408 | 64 |
+| 0.2162 | 7.6802 | 1705 | 0.2100 | 0.9247 | 0.9025 | 0.9383 | 0.9201 | 1537 | 1744 | 166 | 101 |
+| 0.1981 | 7.9279 | 1760 | 0.1673 | 0.9487 | 0.9522 | 0.9359 | 0.9440 | 1533 | 1833 | 77 | 105 |
+| 0.2019 | 8.1757 | 1815 | 0.2276 | 0.9146 | 0.8739 | 0.9524 | 0.9115 | 1560 | 1685 | 225 | 78 |
+| 0.2292 | 8.4234 | 1870 | 0.1978 | 0.9377 | 0.9170 | 0.9512 | 0.9338 | 1558 | 1769 | 141 | 80 |
+| 0.2045 | 8.6712 | 1925 | 0.1614 | 0.9546 | 0.9953 | 0.9060 | 0.9485 | 1484 | 1903 | 7 | 154 |
+| 0.2145 | 8.9189 | 1980 | 0.1544 | 0.9580 | 0.9794 | 0.9286 | 0.9533 | 1521 | 1878 | 32 | 117 |
+| 0.1937 | 9.1667 | 2035 | 0.1571 | 0.9515 | 0.9747 | 0.9188 | 0.9459 | 1505 | 1871 | 39 | 133 |
+| 0.2071 | 9.4144 | 2090 | 0.1948 | 0.9374 | 0.9194 | 0.9475 | 0.9333 | 1552 | 1774 | 136 | 86 |
+| 0.2136 | 9.6622 | 2145 | 0.2500 | 0.8861 | 0.8324 | 0.9432 | 0.8844 | 1545 | 1599 | 311 | 93 |
+| 0.1980 | 9.9099 | 2200 | 0.1493 | 0.9586 | 0.9825 | 0.9267 | 0.9538 | 1518 | 1883 | 27 | 120 |
 
 
 ### Framework versions
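The final-row confusion-matrix counts (Tp=1518, Tn=1883, Fp=27, Fn=120) can be checked against the reported accuracy, precision, recall, and F1 using the standard binary-classification definitions:

```python
# Recompute the final evaluation metrics from the confusion-matrix
# counts reported in the updated model card.
tp, tn, fp, fn = 1518, 1883, 27, 120

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 4))   # 0.9586
print(round(precision, 4))  # 0.9825
print(round(recall, 4))     # 0.9267
print(round(f1, 4))         # 0.9538
```

The recomputed values match the card's reported metrics to four decimal places.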
config.json CHANGED
@@ -1,34 +1,33 @@
 {
   "architectures": [
-    "ViTForImageClassification"
+    "MobileNetV2ForImageClassification"
   ],
-  "attention_probs_dropout_prob": 0.0,
+  "classifier_dropout_prob": 0.2,
+  "depth_divisible_by": 8,
+  "depth_multiplier": 1.4,
   "dtype": "float32",
-  "encoder_stride": 16,
-  "hidden_act": "gelu",
-  "hidden_dropout_prob": 0.0,
-  "hidden_size": 1024,
+  "expand_ratio": 6,
+  "finegrained_output": true,
+  "first_layer_is_expansion": true,
+  "hidden_act": "relu6",
   "id2label": {
     "0": "0",
     "1": "1"
   },
   "image_size": 224,
   "initializer_range": 0.02,
-  "intermediate_size": 4096,
   "label2id": {
     "0": 0,
     "1": 1
   },
-  "layer_norm_eps": 1e-12,
-  "model_type": "vit",
-  "num_attention_heads": 16,
+  "layer_norm_eps": 0.001,
+  "min_depth": 8,
+  "model_type": "mobilenet_v2",
   "num_channels": 3,
-  "num_hidden_layers": 24,
-  "patch_size": 16,
-  "pooler_act": "tanh",
-  "pooler_output_size": 1024,
+  "output_stride": 32,
   "problem_type": "single_label_classification",
-  "qkv_bias": true,
+  "semantic_loss_ignore_index": 255,
+  "tf_padding": true,
   "transformers_version": "5.0.0",
   "use_cache": false
 }
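The new config's `depth_multiplier: 1.4`, `depth_divisible_by: 8`, and `min_depth: 8` keys control how MobileNetV2 scales its channel counts: each width is multiplied by 1.4 and then rounded to a multiple of 8. A sketch of that standard rounding rule, assuming the usual MobileNet formulation (the helper name `make_divisible` is ours, not from this repo):

```python
def make_divisible(value, divisor=8, min_value=8):
    """Round a depth_multiplier-scaled channel count to a multiple of
    `divisor`, never dropping more than 10% below the scaled value
    (the conventional MobileNet rounding rule)."""
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    if new_value < 0.9 * value:
        new_value += divisor
    return new_value

# With depth_multiplier=1.4 the 32-channel stem becomes 48 channels,
# and the 1280-channel head becomes 1792 (finegrained_output=true keeps
# the head scaled with the multiplier rather than fixed at 1280).
print(make_divisible(32 * 1.4))    # 48
print(make_divisible(1280 * 1.4))  # 1792
```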
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:779aeafe9a53b2a1b8a4b18e8a6a8bf637f7940dbd277f1883680150cb5fee1e
-size 1213261264
+oid sha256:8c1aac8021df27502e0f28aa91f5fbcb50fcb46ed9637cd52b4ee3c38854b0a9
+size 17507576
preprocessor_config.json CHANGED
@@ -1,6 +1,12 @@
 {
-  "do_convert_rgb": null,
+  "crop_size": {
+    "height": 224,
+    "width": 224
+  },
+  "data_format": "channels_first",
+  "do_center_crop": true,
   "do_normalize": true,
+  "do_reduce_labels": false,
   "do_rescale": true,
   "do_resize": true,
   "image_mean": [
@@ -8,7 +14,7 @@
     0.5,
     0.5
   ],
-  "image_processor_type": "ViTImageProcessor",
+  "image_processor_type": "MobileNetV2ImageProcessorFast",
   "image_std": [
     0.5,
     0.5,
@@ -17,7 +23,6 @@
   "resample": 2,
   "rescale_factor": 0.00392156862745098,
   "size": {
-    "height": 224,
-    "width": 224
+    "shortest_edge": 256
   }
 }
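The preprocessing geometry changes here: the old ViT processor resized images directly to 224×224, while the new config resizes so the shortest edge is 256 and then center-crops a 224×224 patch, preserving aspect ratio. A small sketch of the resize step (the helper name is illustrative, not from the repo):

```python
def shortest_edge_size(width, height, shortest_edge=256):
    """Target (width, height) after resizing so the shorter side equals
    `shortest_edge` while keeping the aspect ratio, as configured by
    "size": {"shortest_edge": 256}."""
    scale = shortest_edge / min(width, height)
    return round(width * scale), round(height * scale)

# A 640x480 photo is first resized to 341x256; the do_center_crop step
# then cuts the 224x224 crop_size patch from the middle.
print(shortest_edge_size(640, 480))  # (341, 256)
```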
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a20650d184cd21a5ecf7b63db317856b5992087aa5fe9cbe6ebb9da5ccee247d
+oid sha256:459ae50093485df6de25fbad1d63b321bd8f6f371a46c2f38fdb6d2d902d243a
 size 5137
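The updated card trains with a `linear` scheduler, 552 warmup steps, and a 5e-06 peak learning rate; the results table ends at step 2200 just before epoch 10, so the run totals roughly 2220 steps (an estimate from the logged steps-per-epoch, not stated in the card). A sketch of that warmup-then-decay shape:

```python
def linear_lr(step, base_lr=5e-06, warmup_steps=552, total_steps=2220):
    """Linear warmup from 0 to base_lr over warmup_steps, then linear
    decay back to 0 at total_steps (the shape of transformers' "linear"
    lr_scheduler_type). total_steps here is an estimate for this run."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))     # 0.0
print(linear_lr(552))   # 5e-06 (peak, end of warmup)
print(linear_lr(2220))  # 0.0
```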