Hemg committed on
Commit b8ac19d · verified · 1 Parent(s): daef782

Model save
README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: Melanoma-Cancer-Image-Classification
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Melanoma-Cancer-Image-Classification
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2301
+ - Accuracy: 0.9272
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 16
+ - mixed_precision_training: Native AMP
+
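The hyperparameter list above implies two derived quantities. A minimal sketch (plain Python, values copied from the card; the total step count of 1184 is read from the last row of the training results table) checks that they are consistent:

```python
# Values copied from the hyperparameter list above; total_steps (1184)
# is the final "Step" entry in the training results table.
train_batch_size = 32
gradient_accumulation_steps = 4
warmup_ratio = 0.1
total_steps = 1184

# Effective batch size per optimizer step; matches total_train_batch_size.
effective_batch = train_batch_size * gradient_accumulation_steps

# With a linear scheduler and warmup_ratio 0.1, the learning rate warms
# up over roughly the first 10% of optimizer steps, then decays linearly.
warmup_steps = int(warmup_ratio * total_steps)

print(effective_batch, warmup_steps)  # prints: 128 118
```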
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.5231        | 1.0   | 74   | 0.2938          | 0.8822   |
+ | 0.2544        | 1.99  | 148  | 0.2562          | 0.8956   |
+ | 0.2214        | 2.99  | 222  | 0.2421          | 0.8910   |
+ | 0.1882        | 4.0   | 297  | 0.2090          | 0.9112   |
+ | 0.1584        | 5.0   | 371  | 0.2186          | 0.9125   |
+ | 0.1328        | 5.99  | 445  | 0.2061          | 0.9192   |
+ | 0.1123        | 6.99  | 519  | 0.2157          | 0.9184   |
+ | 0.0982        | 8.0   | 594  | 0.2007          | 0.9259   |
+ | 0.0868        | 9.0   | 668  | 0.2206          | 0.9297   |
+ | 0.0738        | 9.99  | 742  | 0.2263          | 0.9209   |
+ | 0.0666        | 10.99 | 816  | 0.2197          | 0.9268   |
+ | 0.0604        | 12.0  | 891  | 0.2050          | 0.9306   |
+ | 0.0527        | 13.0  | 965  | 0.2288          | 0.9259   |
+ | 0.0488        | 13.99 | 1039 | 0.2543          | 0.9251   |
+ | 0.0461        | 14.99 | 1113 | 0.2289          | 0.9322   |
+ | 0.0387        | 15.95 | 1184 | 0.2301          | 0.9272   |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.1.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
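The card above omits a usage snippet; a hypothetical inference sketch follows. The repository id is an assumption inferred from the model-index name and the committer's username, the image path is a placeholder, and the label set is unknown since the card does not name the dataset:

```python
def classify(image_path, model_id="Hemg/Melanoma-Cancer-Image-Classification"):
    """Classify a skin-lesion image with the fine-tuned ViT checkpoint.

    The default model_id is an assumption (model-index name + committer
    username); adjust it to the actual repository id.
    """
    # Imported lazily so the module loads even without transformers installed.
    from transformers import pipeline  # Transformers 4.38.2 per the card

    clf = pipeline("image-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return clf(image_path)


if __name__ == "__main__":
    # "example_lesion.jpg" is a placeholder path, not a file from this repo.
    print(classify("example_lesion.jpg"))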
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:34b9aa610eebac765e8242caf12fa75f77ee7b2324d0af8fd05152855b72f52c
+ oid sha256:75db7625081178ebb502842577c6bf1c15ef2f74037eceef75431005aa557d0a
  size 343223968
runs/Mar12_02-45-38_bcfc62ceb3a9/events.out.tfevents.1710211539.bcfc62ceb3a9.34.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c0456300e79c1e351e51b58d4ea7f2b7bcfc3432f315fe1dc2a371fde87dd4fd
- size 13157
+ oid sha256:a40b6d1f2351f3ec7d3afa589b1d95c5deebda96a06c9de3b35f5a23fdd9368c
+ size 13511