---
base_model: google/vit-base-patch16-224-in21k
library_name: transformers
license: creativeml-openrail-m
inference: true
tags:
- image-classification
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->


# Image Classification

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the Docty/Mangovariety dataset. 
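The base checkpoint is a ViT-Base model with 16×16 patches and a 224×224 input resolution. As an illustration of the architecture's input/output geometry, the sketch below builds a randomly initialised ViT with the same configuration (the real checkpoint would be loaded with `from_pretrained`; `num_labels=8` is only a guess based on the eight example images and is not confirmed by this card):

```python
import torch
from transformers import ViTConfig, ViTForImageClassification

# Illustrative only: random weights with the same geometry as
# google/vit-base-patch16-224-in21k (16x16 patches, 224x224 input).
# num_labels=8 is an assumption, not taken from this card.
config = ViTConfig(image_size=224, patch_size=16, num_labels=8)
model = ViTForImageClassification(config)

pixel_values = torch.randn(1, 3, 224, 224)  # one dummy RGB image
logits = model(pixel_values=pixel_values).logits
print(logits.shape)  # torch.Size([1, 8])
```

The classification head maps the [CLS] token representation to one logit per label, so the output shape is always `(batch_size, num_labels)`.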

Some example images from the dataset are shown below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
![img_4](./image_4.png)
![img_5](./image_5.png)
![img_6](./image_6.png)
![img_7](./image_7.png)




## Intended uses & limitations

#### How to use

```python
from transformers import pipeline

# "<model-repo-id>" is a placeholder: substitute the repository id of this
# fine-tuned checkpoint.
classifier = pipeline("image-classification", model="<model-repo-id>")
print(classifier("path/to/mango_image.jpg"))  # list of {"label", "score"} dicts
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]