Foxasdf committed on
Commit
d8c5c1b
·
verified ·
1 Parent(s): 9d76a25

End of training

Files changed (4)
  1. README.md +93 -0
  2. config.json +54 -0
  3. model.safetensors +3 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: timm/tf_efficientnetv2_s.in21k
+ tags:
+ - timm
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: EfficientNetV2_Small_v1
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # EfficientNetV2_Small_v1
+
+ This model is a fine-tuned version of [timm/tf_efficientnetv2_s.in21k](https://huggingface.co/timm/tf_efficientnetv2_s.in21k) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0340
+ - Accuracy: 0.9935
+ - Precision: 0.9981
+ - Recall: 0.9878
+ - F1: 0.9929
+ - Tp: 1618
+ - Tn: 1907
+ - Fp: 3
+ - Fn: 20
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 442
+ - num_epochs: 20
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Tp | Tn | Fp | Fn |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:----:|:----:|:--:|:--:|
+ | 0.1995 | 1.0 | 222 | 0.1349 | 0.9628 | 0.9575 | 0.9621 | 0.9598 | 1576 | 1840 | 70 | 62 |
+ | 0.1442 | 2.0 | 444 | 0.0940 | 0.9789 | 0.9956 | 0.9585 | 0.9767 | 1570 | 1903 | 7 | 68 |
+ | 0.1625 | 3.0 | 666 | 0.0827 | 0.9837 | 0.9925 | 0.9719 | 0.9821 | 1592 | 1898 | 12 | 46 |
+ | 0.1592 | 4.0 | 888 | 0.0926 | 0.9752 | 0.9708 | 0.9756 | 0.9732 | 1598 | 1862 | 48 | 40 |
+ | 0.1100 | 5.0 | 1110 | 0.0544 | 0.9876 | 0.9950 | 0.9780 | 0.9865 | 1602 | 1902 | 8 | 36 |
+ | 0.1497 | 6.0 | 1332 | 0.0635 | 0.9868 | 0.9877 | 0.9835 | 0.9856 | 1611 | 1890 | 20 | 27 |
+ | 0.1125 | 7.0 | 1554 | 0.0485 | 0.9896 | 0.9957 | 0.9817 | 0.9886 | 1608 | 1903 | 7 | 30 |
+ | 0.1202 | 8.0 | 1776 | 0.0774 | 0.9794 | 0.9740 | 0.9817 | 0.9778 | 1608 | 1867 | 43 | 30 |
+ | 0.1031 | 9.0 | 1998 | 0.0507 | 0.9893 | 0.9938 | 0.9829 | 0.9883 | 1610 | 1900 | 10 | 28 |
+ | 0.1211 | 10.0 | 2220 | 0.0434 | 0.9915 | 0.9975 | 0.9841 | 0.9908 | 1612 | 1906 | 4 | 26 |
+ | 0.1239 | 11.0 | 2442 | 0.0400 | 0.9918 | 0.9975 | 0.9847 | 0.9911 | 1613 | 1906 | 4 | 25 |
+ | 0.1066 | 12.0 | 2664 | 0.0403 | 0.9927 | 0.9988 | 0.9853 | 0.9920 | 1614 | 1908 | 2 | 24 |
+ | 0.1065 | 13.0 | 2886 | 0.0363 | 0.9927 | 0.9994 | 0.9847 | 0.9920 | 1613 | 1909 | 1 | 25 |
+ | 0.1074 | 14.0 | 3108 | 0.0378 | 0.9930 | 0.9988 | 0.9860 | 0.9923 | 1615 | 1908 | 2 | 23 |
+ | 0.1128 | 15.0 | 3330 | 0.0327 | 0.9924 | 0.9981 | 0.9853 | 0.9917 | 1614 | 1907 | 3 | 24 |
+ | 0.0963 | 16.0 | 3552 | 0.0309 | 0.9930 | 0.9988 | 0.9860 | 0.9923 | 1615 | 1908 | 2 | 23 |
+ | 0.1379 | 17.0 | 3774 | 0.0366 | 0.9927 | 0.9969 | 0.9872 | 0.9920 | 1617 | 1905 | 5 | 21 |
+ | 0.1070 | 18.0 | 3996 | 0.0331 | 0.9930 | 0.9981 | 0.9866 | 0.9923 | 1616 | 1907 | 3 | 22 |
+ | 0.1332 | 19.0 | 4218 | 0.0343 | 0.9930 | 0.9981 | 0.9866 | 0.9923 | 1616 | 1907 | 3 | 22 |
+ | 0.1294 | 20.0 | 4440 | 0.0340 | 0.9935 | 0.9981 | 0.9878 | 0.9929 | 1618 | 1907 | 3 | 20 |
+
+
+ ### Framework versions
+
+ - Transformers 5.2.0
+ - Pytorch 2.9.0+cu126
+ - Datasets 4.0.0
+ - Tokenizers 0.22.2
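As a sanity check (not part of the original card), the headline metrics can be recomputed from the confusion-matrix counts in the final table row (Tp=1618, Tn=1907, Fp=3, Fn=20) using the standard definitions:

```python
# Recompute the reported evaluation metrics from the final confusion-matrix
# counts (last row of the training-results table above).
tp, tn, fp, fn = 1618, 1907, 3, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 4), round(precision, 4), round(recall, 4), round(f1, 4))
# → 0.9935 0.9981 0.9878 0.9929, matching the reported values
```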
config.json ADDED
@@ -0,0 +1,54 @@
+ {
+ "architecture": "tf_efficientnetv2_s",
+ "architectures": [
+ "TimmWrapperForImageClassification"
+ ],
+ "do_pooling": true,
+ "dtype": "float32",
+ "initializer_range": 0.02,
+ "label_names": [
+ "0",
+ "1"
+ ],
+ "model_args": null,
+ "model_type": "timm_wrapper",
+ "num_classes": 2,
+ "num_features": 1280,
+ "pretrained_cfg": {
+ "classifier": "classifier",
+ "crop_mode": "center",
+ "crop_pct": 1.0,
+ "custom_load": false,
+ "first_conv": "conv_stem",
+ "fixed_input_size": false,
+ "input_size": [
+ 3,
+ 300,
+ 300
+ ],
+ "interpolation": "bicubic",
+ "mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "pool_size": [
+ 10,
+ 10
+ ],
+ "std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "tag": "in21k",
+ "test_input_size": [
+ 3,
+ 384,
+ 384
+ ]
+ },
+ "problem_type": "single_label_classification",
+ "transformers_version": "5.2.0",
+ "use_cache": false
+ }
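The `pretrained_cfg` above implies a specific input pipeline: 3×300×300 inputs, bicubic resizing, and mean/std of 0.5 per channel (i.e. pixels mapped from [0, 255] to [-1, 1]). A minimal normalization sketch, assuming an HxWx3 uint8 image already resized by an image library (the resize step itself is not shown here):

```python
# Sketch of the normalization implied by pretrained_cfg:
# mean = std = 0.5 per channel, so uint8 pixels map from [0, 255] to [-1, 1].
import numpy as np

def normalize(image_u8: np.ndarray) -> np.ndarray:
    """Normalize an HxWx3 uint8 image to the model's expected range.

    Bicubic resizing to 300x300 (per "input_size"/"interpolation") is
    assumed to have been done by an image library such as PIL beforehand.
    """
    x = image_u8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - 0.5) / 0.5                      # apply mean=0.5, std=0.5 -> [-1, 1]
    return x.transpose(2, 0, 1)              # HWC -> CHW layout

img = np.full((300, 300, 3), 128, dtype=np.uint8)  # mid-gray test image
out = normalize(img)
print(out.shape)  # (3, 300, 300)
```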
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3f525be6194b9f353acc6ece96d24321759ec199c5e43fbf10d22f0c03c2821
+ size 81409536
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9117fdd7dfca5577dec1c1e1d1b7f51f374ab196ab2f93b7e8922bdcc002b11
+ size 5201