n1hal committed
Commit e6d0ed5 · verified · 1 Parent(s): 208fe93

End of training

Files changed (5)
  1. README.md +25 -25
  2. config.json +0 -0
  3. model.safetensors +2 -2
  4. preprocessor_config.json +2 -2
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: microsoft/swinv2-large-patch4-window12-192-22k
+base_model: microsoft/swinv2-base-patch4-window16-256
 tags:
 - generated_from_trainer
 metrics:
@@ -17,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # swinv2-plantclef
 
-This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on an unknown dataset.
+This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7612
-- Accuracy: 0.7096
-- F1: 0.7075
+- Loss: 1.0548
+- Accuracy: 0.8199
+- F1: 0.8190
 
 ## Model description
 
@@ -41,8 +41,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 64
-- eval_batch_size: 64
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -51,24 +51,24 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step   | Validation Loss | Accuracy | F1     |
-|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
-| 1.8047        | 1.0   | 19801  | 1.6662          | 0.5999   | 0.5944 |
-| 1.4078        | 2.0   | 39602  | 1.4697          | 0.6442   | 0.6393 |
-| 1.0698        | 3.0   | 59403  | 1.3977          | 0.6636   | 0.6606 |
-| 0.8149        | 4.0   | 79204  | 1.3933          | 0.6759   | 0.6724 |
-| 0.5556        | 5.0   | 99005  | 1.4412          | 0.6780   | 0.6760 |
-| 0.4028        | 6.0   | 118806 | 1.5032          | 0.6806   | 0.6785 |
-| 0.2776        | 7.0   | 138607 | 1.5777          | 0.6808   | 0.6791 |
-| 0.1973        | 8.0   | 158408 | 1.6136          | 0.6852   | 0.6834 |
-| 0.1357        | 9.0   | 178209 | 1.6761          | 0.6858   | 0.6836 |
-| 0.0892        | 10.0  | 198010 | 1.7073          | 0.6897   | 0.6879 |
-| 0.0673        | 11.0  | 217811 | 1.7313          | 0.6930   | 0.6913 |
-| 0.0522        | 12.0  | 237612 | 1.7440          | 0.6976   | 0.6958 |
-| 0.0263        | 13.0  | 257413 | 1.7616          | 0.7007   | 0.6987 |
-| 0.0181        | 14.0  | 277214 | 1.7694          | 0.7038   | 0.7019 |
-| 0.0163        | 15.0  | 297015 | 1.7671          | 0.7076   | 0.7057 |
-| 0.01          | 16.0  | 316816 | 1.7612          | 0.7096   | 0.7075 |
+| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
+| 1.1414        | 1.0   | 897   | 0.9819          | 0.7171   | 0.7046 |
+| 0.654         | 2.0   | 1794  | 0.7608          | 0.7694   | 0.7688 |
+| 0.394         | 3.0   | 2691  | 0.7461          | 0.7795   | 0.7767 |
+| 0.2437        | 4.0   | 3588  | 0.7369          | 0.7917   | 0.7908 |
+| 0.1428        | 5.0   | 4485  | 0.7939          | 0.7945   | 0.7929 |
+| 0.0878        | 6.0   | 5382  | 0.8352          | 0.7958   | 0.7950 |
+| 0.0621        | 7.0   | 6279  | 0.8802          | 0.7945   | 0.7928 |
+| 0.0353        | 8.0   | 7176  | 0.9028          | 0.8011   | 0.8005 |
+| 0.0241        | 9.0   | 8073  | 0.9592          | 0.8043   | 0.8045 |
+| 0.0241        | 10.0  | 8970  | 1.0075          | 0.8068   | 0.8047 |
+| 0.0129        | 11.0  | 9867  | 1.0254          | 0.8127   | 0.8120 |
+| 0.0058        | 12.0  | 10764 | 1.0340          | 0.8162   | 0.8151 |
+| 0.007         | 13.0  | 11661 | 1.0661          | 0.8165   | 0.8159 |
+| 0.0052        | 14.0  | 12558 | 1.0533          | 0.8168   | 0.8166 |
+| 0.0049        | 15.0  | 13455 | 1.0660          | 0.8174   | 0.8164 |
+| 0.015         | 16.0  | 14352 | 1.0548          | 0.8199   | 0.8190 |
 
 
 ### Framework versions
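A quick arithmetic check ties the two training tables to the batch sizes changed in this diff: total steps equal epochs × steps per epoch, and steps per epoch × batch size approximates the training-set size. The model card does not state dataset sizes, so the sample counts below are inferred, not confirmed:

```python
# Sanity-check the training tables against the batch sizes in the diff.
# Dataset sizes are inferred from steps_per_epoch * batch_size; the card
# itself does not state them (a lower bound, ignoring a partial last batch).

def inferred_train_size(steps_per_epoch: int, batch_size: int) -> int:
    return steps_per_epoch * batch_size

# Old run: swinv2-large, batch 64, 19801 steps/epoch, 16 epochs.
assert 19801 * 16 == 316816   # matches the final "Step" in the old table
old = inferred_train_size(19801, 64)

# New run: swinv2-base, batch 32, 897 steps/epoch, 16 epochs.
assert 897 * 16 == 14352      # matches the final "Step" in the new table
new = inferred_train_size(897, 32)

print(old, new)  # 1267264 28704
```

The step counts confirm both runs trained for 16 full epochs; the two runs evidently used very different training sets, not just different base models.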
config.json CHANGED
The diff for this file is too large to render.
 
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c363dd16f6ae1c37c23da702ccced675f3d27cef3c409c26bcc2399341007d81
-size 828865536
+oid sha256:a458bebb7faa08f5d1215d3dbb6957d9809006379bb27f1793f5759ef9dc1aa5
+size 348047328
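The checkpoint shrinking from ~829 MB to ~348 MB is consistent with the swap from a SwinV2-Large to a SwinV2-Base backbone. Assuming float32 weights at 4 bytes per parameter (the actual per-tensor dtype is recorded in the safetensors header, so this is an estimate), the payload sizes imply:

```python
# Back-of-the-envelope parameter counts from the LFS payload sizes above,
# assuming float32 weights (4 bytes per parameter). Treat as estimates:
# the safetensors header records the real dtype of each tensor.
BYTES_PER_PARAM = 4  # float32

def approx_params(size_bytes: int) -> int:
    return size_bytes // BYTES_PER_PARAM

old_params = approx_params(828_865_536)  # previous swinv2-large checkpoint
new_params = approx_params(348_047_328)  # new swinv2-base checkpoint

print(f"{old_params:,}")  # 207,216,384
print(f"{new_params:,}")  # 87,011,832
```

Roughly 207M vs 87M parameters, in line with the published SwinV2-Large and SwinV2-Base backbone sizes plus a classification head.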
preprocessor_config.json CHANGED
@@ -16,7 +16,7 @@
   "resample": 3,
   "rescale_factor": 0.00392156862745098,
   "size": {
-    "height": 192,
-    "width": 192
+    "height": 256,
+    "width": 256
   }
 }
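The config change means inputs are now resized to 256×256 (matching the new base model's expected resolution) instead of 192×192. A minimal sketch of what the updated preprocessor does, using Pillow and NumPy directly rather than the transformers image processor that actually consumes this file (`resample: 3` is PIL bicubic, `rescale_factor` is 1/255):

```python
# Minimal sketch of the updated preprocessing: resize to 256x256 with
# bicubic resampling (PIL resample code 3) and rescale pixels by 1/255.
# The real pipeline is the transformers image processor; this only mirrors
# the config values changed in this commit (normalization omitted).
import numpy as np
from PIL import Image

def preprocess(img: Image.Image, size: int = 256) -> np.ndarray:
    resized = img.resize((size, size), resample=Image.BICUBIC)  # "resample": 3
    arr = np.asarray(resized, dtype=np.float32)
    return arr * 0.00392156862745098  # "rescale_factor": 1/255

dummy = Image.new("RGB", (500, 300), color=(128, 64, 32))
out = preprocess(dummy)
print(out.shape)  # (256, 256, 3), values in [0, 1]
```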
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8271b06b47a6efa2c4edb5c74a702b5d2d5476b5c4d0fe6a23d0c87442cb6280
+oid sha256:23a5b94098243fe5185696fd5080463c1cdfd84c8335489548a41f1a11e6454c
 size 5240