thethinkmachine committed on
Commit 438b538 · verified · 1 Parent(s): 0ed9d5f

End of training

Files changed (4)
  1. README.md +88 -0
  2. config.json +13 -0
  3. model.safetensors +3 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,88 @@
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: EfficientNetV2-S-FacesMTL-EXP1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # EfficientNetV2-S-FacesMTL-EXP1
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Gender Accuracy: 0.9219
+ - Gender F1: 0.8960
+ - Age MAE: 5.4856
+ - Age RMSE: 7.7077
+ - Loss: 59.6107
+
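As a sanity check, the four evaluation metrics above can be recomputed from raw head outputs; a minimal pure-Python sketch (the function names and the example arrays are illustrative, not code from this repository):

```python
import math

def gender_metrics(y_true, y_pred):
    """Binary accuracy and F1 for the gender head (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, f1

def age_metrics(y_true, y_pred):
    """MAE and RMSE for the age regression head."""
    errs = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```

Note that MAE ≤ RMSE always holds (RMSE weights large errors more), consistent with the 5.49 vs. 7.71 gap reported above.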
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
+
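With `lr_scheduler_type: cosine` and no warmup listed, the learning rate presumably follows the standard cosine decay from the peak LR to zero over all training steps (roughly 4,340 here, inferred from the results table: 150 steps ≈ 0.1728 epochs, so ≈868 steps/epoch × 5 epochs). A minimal sketch of that schedule, reimplemented without the library:

```python
import math

def cosine_lr(step, total_steps, peak_lr=1e-4, warmup_steps=0):
    """Cosine decay from peak_lr to 0 over total_steps, with optional linear warmup."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

At step 0 this returns the peak LR (1e-4), halfway through training 5e-5, and 0 at the final step.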
+ ### Training results
+
+ | Training Loss | Epoch | Step | Gender Accuracy | Gender F1 | Age MAE | Age RMSE | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|:--------:|:---------------:|
+ | 141.0082 | 0.1728 | 150 | 0.7669 | 0.4982 | 9.8306 | 13.2003 | 174.7533 |
+ | 115.015 | 0.3456 | 300 | 0.8179 | 0.6641 | 7.2638 | 9.8329 | 97.0897 |
+ | 114.0637 | 0.5184 | 450 | 0.8727 | 0.8262 | 7.6025 | 10.2955 | 106.4104 |
+ | 108.9411 | 0.6912 | 600 | 0.8355 | 0.7141 | 6.4249 | 8.5998 | 74.3246 |
+ | 102.9268 | 0.8641 | 750 | 0.8862 | 0.8405 | 6.3497 | 8.6367 | 74.9009 |
+ | 74.0264 | 1.0369 | 900 | 0.8911 | 0.8478 | 6.0467 | 8.1970 | 67.4731 |
+ | 69.2823 | 1.2097 | 1050 | 0.8741 | 0.8048 | 7.7731 | 9.7514 | 95.3764 |
+ | 75.1102 | 1.3825 | 1200 | 0.8971 | 0.8558 | 6.1987 | 8.4372 | 71.4451 |
+ | 84.3635 | 1.5553 | 1350 | 0.8969 | 0.8597 | 6.1119 | 8.4568 | 71.7744 |
+ | 69.7893 | 1.7281 | 1500 | 0.9072 | 0.8736 | 6.0104 | 8.2477 | 68.2638 |
+ | 61.8971 | 1.9009 | 1650 | 0.9092 | 0.8763 | 6.1341 | 8.2888 | 68.9466 |
+ | 42.5042 | 2.0737 | 1800 | 0.8997 | 0.8730 | 5.7658 | 7.9370 | 63.2745 |
+ | 39.7624 | 2.2465 | 1950 | 0.9107 | 0.8726 | 5.9121 | 8.1481 | 66.6196 |
+ | 41.98 | 2.4194 | 2100 | 0.9136 | 0.8830 | 5.6534 | 7.8299 | 61.5340 |
+ | 48.5888 | 2.5922 | 2250 | 0.9069 | 0.8794 | 5.7673 | 7.9777 | 63.8842 |
+ | 49.4607 | 2.7650 | 2400 | 0.9136 | 0.8835 | 5.6717 | 7.7749 | 60.6700 |
+ | 52.7909 | 2.9378 | 2550 | 0.9182 | 0.8907 | 5.7793 | 7.9226 | 62.9845 |
+ | 29.6541 | 3.1106 | 2700 | 0.9182 | 0.8910 | 5.5971 | 7.6659 | 58.9833 |
+ | 29.6989 | 3.2834 | 2850 | 0.9242 | 0.8964 | 5.6225 | 7.7525 | 60.3134 |
+ | 34.2387 | 3.4562 | 3000 | 0.9225 | 0.8959 | 5.6239 | 7.7780 | 60.7093 |
+ | 29.4395 | 3.6290 | 3150 | 0.9205 | 0.8945 | 5.6094 | 7.7657 | 60.5210 |
+ | 30.939 | 3.8018 | 3300 | 0.9231 | 0.8977 | 5.5582 | 7.6809 | 59.2081 |
+ | 43.7756 | 3.9747 | 3450 | 0.9202 | 0.8945 | 5.5838 | 7.7157 | 59.7454 |
+ | 21.8149 | 4.1475 | 3600 | 0.9225 | 0.8951 | 5.5411 | 7.6636 | 58.9366 |
+ | 28.7175 | 4.3203 | 3750 | 0.9213 | 0.8965 | 5.5546 | 7.6726 | 59.0829 |
+ | 28.0368 | 4.4931 | 3900 | 0.9245 | 0.8990 | 5.5760 | 7.7432 | 60.1640 |
+ | 24.2052 | 4.6659 | 4050 | 0.9165 | 0.8918 | 5.5538 | 7.6908 | 59.3667 |
+ | 24.5022 | 4.8387 | 4200 | 0.9245 | 0.8992 | 5.5598 | 7.7017 | 59.5255 |
+
+
+ ### Framework versions
+
+ - Transformers 4.57.1
+ - PyTorch 2.9.0+cu130
+ - Datasets 4.4.1
+ - Tokenizers 0.22.1
config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "age_loss_weight": 1.0,
+   "architectures": [
+     "FacesMultiNet"
+   ],
+   "backbone_type": "efficientnetv2_s",
+   "dtype": "float32",
+   "gender_loss_weight": 1.0,
+   "model_type": "facesmultitasknet",
+   "num_age_labels": 1,
+   "num_gender_labels": 2,
+   "transformers_version": "4.57.1"
+ }
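The `age_loss_weight` and `gender_loss_weight` fields suggest the total training loss is a weighted sum of the two per-head losses (with both weights at 1.0, a plain sum). The combination below is an assumption based on this config, not code from the repository:

```python
def multitask_loss(age_loss, gender_loss,
                   age_loss_weight=1.0, gender_loss_weight=1.0):
    """Combine the per-head losses into the single scalar the trainer optimizes."""
    return age_loss_weight * age_loss + gender_loss_weight * gender_loss
```

This also explains the large loss magnitudes in the training table: an unscaled age regression loss (e.g. MSE in years²) can dominate the sum unless the weights are tuned.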
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf8c97ee54a5183ad0f22b8652490c18dfac3abb10131448a44a261a3234ad22
+ size 86664844
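`model.safetensors` is stored via Git LFS: the file in the repository is only a pointer, and the `oid` line records the SHA-256 of the real ~87 MB payload. After downloading, the weights can be verified against that digest with a streaming hash (the path below is hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("model.safetensors") should equal the oid value from the pointer file.
```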
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d23c9795e3e4fc25e60cd9660ff9e338025c43eadf724c45e4bfbe1ee3ec65a
+ size 5841