Docty committed
Commit 545909d · verified · 1 Parent(s): a380bca

Model save

Files changed (1):
  1. README.md +41 -21
README.md CHANGED
@@ -1,42 +1,62 @@
  ---
- base_model: google/vit-base-patch16-224-in21k
  library_name: transformers
- license: creativeml-openrail-m
- inference: true
  tags:
- - image-classification
  ---

- <!-- This model card has been generated automatically according to the information the training script had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # Image Classification

- This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the Docty/Mangovariety dataset.

- You can find some example images in the following.

- ![img_0](./image_0.png)
- ![img_1](./image_1.png)
- ![img_2](./image_2.png)
- ![img_3](./image_3.png)

- ## Intended uses & limitations

- #### How to use

- ```python
- # TODO: add an example code snippet for running this diffusion pipeline
- ```

- #### Limitations and bias

- [TODO: provide examples of latent issues and potential remediations]

- ## Training details

- [TODO: describe the data used to train the model]

  ---
  library_name: transformers
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
  tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: mangoes
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

+ # mangoes

+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7385
+ - Accuracy: 0.9792

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure
+
+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 1337
+ - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 2.0

+ ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 1.0281        | 1.0   | 170  | 1.0490          | 0.9583   |
+ | 0.7454        | 2.0   | 340  | 0.7385          | 0.9792   |

+ ### Framework versions

+ - Transformers 4.56.1
+ - Pytorch 2.8.0+cu126
+ - Datasets 4.0.0
+ - Tokenizers 0.22.0
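
The new card lists `lr_scheduler_type: linear` with no warmup, so the learning rate decays from `2e-05` to zero across the 340 training steps shown in the results table. A minimal sketch of that per-step value, assuming the Trainer's default linear-with-zero-warmup schedule (the closed form here is an illustration, not code from this repo):

```python
def linear_lr(step: int, total_steps: int = 340, base_lr: float = 2e-05) -> float:
    """Learning rate at a given optimizer step under linear decay from
    base_lr to 0 (warmup omitted, since none is listed in the card)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

# By the end of epoch 1 (step 170 of 340) the rate has halved;
# at step 340 it reaches zero and stays there.
print(linear_lr(0), linear_lr(170), linear_lr(340))
```

This matches why the second epoch's updates are gentler than the first's: the table's epoch-2 loss improvement (1.0490 → 0.7385) happens under a steadily shrinking step size.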