davanstrien (HF Staff) committed
Commit 35fbe73 · 1 Parent(s): 7f61646

update model card README.md

Files changed (1):
  1. README.md +34 -16
README.md CHANGED
@@ -1,8 +1,6 @@
 ---
 license: apache-2.0
 tags:
-- image-classification
-- vision
 - generated_from_trainer
 metrics:
 - f1
@@ -16,10 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # convnext-small-224-leicester_binary
 
-This model is a fine-tuned version of [facebook/convnext-small-224](https://huggingface.co/facebook/convnext-small-224) on the davanstrien/leicester_loaded_annotations_binary dataset.
+This model is a fine-tuned version of [facebook/convnext-small-224](https://huggingface.co/facebook/convnext-small-224) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2627
-- F1: 0.8608
+- Loss: 0.1318
+- F1: 0.9620
 
 ## Model description
 
@@ -44,23 +42,43 @@ The following hyperparameters were used during training:
 - seed: 1337
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10.0
+- num_epochs: 30.0
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| No log        | 1.0   | 7    | 0.5187          | 0.8608 |
-| 0.5904        | 2.0   | 14   | 0.4273          | 0.8608 |
-| 0.3981        | 3.0   | 21   | 0.4115          | 0.8608 |
-| 0.3981        | 4.0   | 28   | 0.4029          | 0.8608 |
-| 0.3285        | 5.0   | 35   | 0.3402          | 0.8608 |
-| 0.308         | 6.0   | 42   | 0.3138          | 0.8608 |
-| 0.308         | 7.0   | 49   | 0.2912          | 0.8608 |
-| 0.2952        | 8.0   | 56   | 0.2752          | 0.8608 |
-| 0.2593        | 9.0   | 63   | 0.2657          | 0.8608 |
-| 0.2568        | 10.0  | 70   | 0.2627          | 0.8608 |
+| No log        | 1.0   | 7    | 0.5143          | 0.8608 |
+| 0.5872        | 2.0   | 14   | 0.4215          | 0.8608 |
+| 0.3903        | 3.0   | 21   | 0.4127          | 0.8608 |
+| 0.3903        | 4.0   | 28   | 0.3605          | 0.8608 |
+| 0.3163        | 5.0   | 35   | 0.3152          | 0.8608 |
+| 0.2942        | 6.0   | 42   | 0.2942          | 0.8608 |
+| 0.2942        | 7.0   | 49   | 0.2669          | 0.8608 |
+| 0.2755        | 8.0   | 56   | 0.2316          | 0.8608 |
+| 0.2281        | 9.0   | 63   | 0.2104          | 0.8608 |
+| 0.2076        | 10.0  | 70   | 0.1938          | 0.8608 |
+| 0.2076        | 11.0  | 77   | 0.1803          | 0.8608 |
+| 0.1832        | 12.0  | 84   | 0.1704          | 0.8608 |
+| 0.1758        | 13.0  | 91   | 0.1650          | 0.8608 |
+| 0.1758        | 14.0  | 98   | 0.1714          | 0.8608 |
+| 0.167         | 15.0  | 105  | 0.1575          | 0.8608 |
+| 0.1519        | 16.0  | 112  | 0.1549          | 0.8608 |
+| 0.1519        | 17.0  | 119  | 0.1705          | 0.8608 |
+| 0.1422        | 18.0  | 126  | 0.1478          | 0.8608 |
+| 0.1444        | 19.0  | 133  | 0.1437          | 0.8608 |
+| 0.1396        | 20.0  | 140  | 0.1398          | 0.8608 |
+| 0.1396        | 21.0  | 147  | 0.1351          | 0.8608 |
+| 0.1293        | 22.0  | 154  | 0.1370          | 0.8987 |
+| 0.1361        | 23.0  | 161  | 0.1335          | 0.8987 |
+| 0.1361        | 24.0  | 168  | 0.1311          | 0.9367 |
+| 0.1246        | 25.0  | 175  | 0.1289          | 0.9620 |
+| 0.1211        | 26.0  | 182  | 0.1283          | 0.9620 |
+| 0.1211        | 27.0  | 189  | 0.1294          | 0.9620 |
+| 0.1182        | 28.0  | 196  | 0.1306          | 0.9620 |
+| 0.1172        | 29.0  | 203  | 0.1312          | 0.9620 |
+| 0.1102        | 30.0  | 210  | 0.1318          | 0.9620 |
 
 
 ### Framework versions
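The diff updates the reported F1 from 0.8608 to 0.9620. As a minimal sketch of how the binary F1 metric reported above is computed, the counts below are hypothetical and chosen only to illustrate the arithmetic; they are not the model's actual predictions:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts: 76 true positives,
# 3 false positives, 3 false negatives.
print(round(f1_score(76, 3, 3), 4))  # 0.962
```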
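The hyperparameters name a linear learning-rate schedule over the new 30 epochs (7 optimization steps per epoch, so 210 steps total, matching the final row of the training table). A minimal sketch of such a schedule, assuming a hypothetical base learning rate and omitting the warmup phase that trainers often add:

```python
BASE_LR = 5e-5        # assumed value; the actual learning_rate is not shown in this diff
TOTAL_STEPS = 30 * 7  # num_epochs * steps per epoch, per the training table

def linear_lr(step: int) -> float:
    """Learning rate decayed linearly from BASE_LR at step 0 to 0 at the final step."""
    return BASE_LR * max(0.0, 1.0 - step / TOTAL_STEPS)

assert linear_lr(0) == BASE_LR
assert linear_lr(TOTAL_STEPS) == 0.0
```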