---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_model_7
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# test_model_7

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8939
- F1 Macro: 0.0651
- F1 Micro: 0.2045
- F1 Weighted: 0.0913
- Precision Macro: 0.0760
- Precision Micro: 0.2045
- Precision Weighted: 0.1037
- Recall Macro: 0.1437
- Recall Micro: 0.2045
- Recall Weighted: 0.2045
- Accuracy: 0.2045
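
For reference, a minimal inference sketch is shown below. It assumes the checkpoint is published under the repo id `corranm/test_model_7` (inferred from this repository; substitute a local path or your own repo id as needed) and uses the standard `transformers` image-classification API:

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "corranm/test_model_7"  # assumed repo id; adjust as needed
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```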

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
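
A minimal `TrainingArguments` sketch mirroring the settings above (Transformers 4.48.1); the `output_dir` is a placeholder, and dataset preparation and `Trainer` wiring are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_model_7",       # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = effective train batch of 128
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
    seed=42,
)
```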

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | Precision Macro | Precision Micro | Precision Weighted | Recall Macro | Recall Micro | Recall Weighted | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------------:|:---------------:|:------------------:|:------------:|:------------:|:---------------:|:--------:|
| No log        | 0.8   | 3    | 1.9112          | 0.0464   | 0.1894   | 0.0664      | 0.0281          | 0.1894          | 0.0403             | 0.1323       | 0.1894       | 0.1894          | 0.1894   |
| No log        | 1.8   | 6    | 1.8938          | 0.0654   | 0.2045   | 0.0917      | 0.0762          | 0.2045          | 0.1040             | 0.1437       | 0.2045       | 0.2045          | 0.2045   |
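
The per-average metrics in this table can be reproduced with a `compute_metrics` function along these lines; this is a hedged sketch assuming scikit-learn and the standard `Trainer` `EvalPrediction` interface, not necessarily the exact code used for this run:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    # eval_pred is an EvalPrediction: (logits, labels)
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    metrics = {"accuracy": accuracy_score(labels, preds)}
    # One F1/precision/recall score per averaging scheme, as in the table above
    for avg in ("macro", "micro", "weighted"):
        metrics[f"f1_{avg}"] = f1_score(labels, preds, average=avg, zero_division=0)
        metrics[f"precision_{avg}"] = precision_score(labels, preds, average=avg, zero_division=0)
        metrics[f"recall_{avg}"] = recall_score(labels, preds, average=avg, zero_division=0)
    return metrics
```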

### Framework versions

- Transformers 4.48.1
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0