Pamreth committed (verified)
Commit 5d6536a · 1 Parent(s): b64af7b

Model save

Files changed (2)
  1. README.md +19 -12
  2. model.safetensors +1 -1
README.md CHANGED
@@ -3,7 +3,6 @@ library_name: transformers
  license: apache-2.0
  base_model: facebook/deit-base-distilled-patch16-224
  tags:
- - image-classification
  - generated_from_trainer
  metrics:
  - accuracy
@@ -17,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  # deit-ena24
 
- This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the ena24 dataset.
+ This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1831
- - Accuracy: 0.9542
+ - Loss: 0.0999
+ - Accuracy: 0.9763
 
  ## Model description
 
@@ -45,20 +44,28 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 1
+ - num_epochs: 2
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|
- | 1.3013 | 0.1302 | 100 | 0.9782 | 0.7145 |
- | 0.8173 | 0.2604 | 200 | 0.5563 | 0.8557 |
- | 0.3854 | 0.3906 | 300 | 0.5591 | 0.8290 |
- | 0.4819 | 0.5208 | 400 | 0.4213 | 0.8916 |
- | 0.5078 | 0.6510 | 500 | 0.3100 | 0.9145 |
- | 0.3561 | 0.7812 | 600 | 0.2305 | 0.9359 |
- | 0.1739 | 0.9115 | 700 | 0.1831 | 0.9542 |
+ | 1.2545 | 0.1302 | 100 | 0.9921 | 0.7122 |
+ | 0.8766 | 0.2604 | 200 | 0.5768 | 0.8443 |
+ | 0.5148 | 0.3906 | 300 | 0.4472 | 0.8618 |
+ | 0.4511 | 0.5208 | 400 | 0.4786 | 0.8779 |
+ | 0.4874 | 0.6510 | 500 | 0.4083 | 0.8863 |
+ | 0.5794 | 0.7812 | 600 | 0.3513 | 0.8977 |
+ | 0.3324 | 0.9115 | 700 | 0.2395 | 0.9282 |
+ | 0.0975 | 1.0417 | 800 | 0.2091 | 0.9473 |
+ | 0.0579 | 1.1719 | 900 | 0.1919 | 0.9420 |
+ | 0.2113 | 1.3021 | 1000 | 0.1756 | 0.9611 |
+ | 0.0301 | 1.4323 | 1100 | 0.1412 | 0.9664 |
+ | 0.0534 | 1.5625 | 1200 | 0.1346 | 0.9687 |
+ | 0.0868 | 1.6927 | 1300 | 0.1292 | 0.9687 |
+ | 0.0623 | 1.8229 | 1400 | 0.1086 | 0.9763 |
+ | 0.1078 | 1.9531 | 1500 | 0.0999 | 0.9763 |
 
 
  ### Framework versions
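The updated training table logs a fractional epoch every 100 steps, which implies a fixed number of optimizer steps per epoch (about 768, so `num_epochs: 2` ends near step 1536). A quick consistency check over a few rows copied from the log above (a sketch, not part of the training script):

```python
# (epoch, step) pairs copied from the updated training-results table.
log = [(0.1302, 100), (0.6510, 500), (1.0417, 800), (1.9531, 1500)]

# Infer steps-per-epoch from the first logged row, then verify the rest agree.
steps_per_epoch = round(log[0][1] / log[0][0])  # 100 / 0.1302 ≈ 768

for epoch, step in log:
    assert abs(epoch * steps_per_epoch - step) < 1.0

print(steps_per_epoch)
```

This is why the second run's table simply extends the first run's step/epoch grid: the data loader and batch size were unchanged, only `num_epochs` grew from 1 to 2.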
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1149beae6e4ce25e86256b988a49f45518e813fb328d526414f4daf6d2ab7bd4
+ oid sha256:706cc1f3b47cee868f603c119c70281180838a7eed89181ed48580f7cc9e30b0
  size 343291936
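The `model.safetensors` change above is a Git LFS pointer update: git stores only the three-line stub, while the ~343 MB weights file lives in LFS storage addressed by the `sha256` oid, so the diff shows a new oid but an unchanged size. A minimal sketch of how such a pointer parses (`parse_lfs_pointer` is a hypothetical helper, not a real library function):

```python
# Git LFS pointer stub for model.safetensors, as it appears after this commit.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:706cc1f3b47cee868f603c119c70281180838a7eed89181ed48580f7cc9e30b0
size 343291936
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line, then pull apart the algo:digest oid field."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),  # bytes of the real file in LFS storage
    }

info = parse_lfs_pointer(POINTER)
```

Because the stub records size in bytes, the unchanged `size 343291936` confirms the new checkpoint has the same tensor layout, only different weight values.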