LindiSimon committed
Commit ba3addf (verified) · 1 parent: 20595ec

Update README.md

Files changed (1): README.md (+51 -3)
README.md CHANGED
@@ -1,3 +1,51 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - vit
+ - image-classification
+ - beans
+ - transfer-learning
+ ---
+
+ # ViT Beans Model
+
+ This model was fine-tuned using transfer learning on the ["beans"](https://huggingface.co/datasets/beans) dataset from the Hugging Face Datasets Hub.
+ It classifies bean plant leaves into the following categories:
+
+ - `LABEL_0`: angular_leaf_spot
+ - `LABEL_1`: bean_rust
+ - `LABEL_2`: healthy
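+
+ A quick way to try these labels is the `image-classification` pipeline. The snippet below is a minimal sketch (the image path is a placeholder), assuming the saved preprocessor is included in the repository:
+
+ ```python
+ from transformers import pipeline
+
+ # Downloads the fine-tuned model and its preprocessor from the Hub
+ classifier = pipeline("image-classification", model="LindiSimon/vit-beans-model")
+
+ # Accepts a local path, URL, or PIL image and returns labels with scores
+ print(classifier("example_input.png"))
+ ```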
+
+ ## Model architecture
+
+ The base model is `google/vit-base-patch16-224`.
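+
+ For reference, attaching a fresh 3-class head to that base checkpoint looks roughly like the sketch below. This is an illustration of the setup, not the exact code used to produce this checkpoint; the fine-tuned model's own `config.json` records the final label mapping.
+
+ ```python
+ from transformers import ViTForImageClassification
+
+ labels = ["angular_leaf_spot", "bean_rust", "healthy"]
+ model = ViTForImageClassification.from_pretrained(
+     "google/vit-base-patch16-224",
+     num_labels=len(labels),
+     id2label={i: name for i, name in enumerate(labels)},
+     label2id={name: i for i, name in enumerate(labels)},
+     ignore_mismatched_sizes=True,  # drop the base checkpoint's original ImageNet head
+ )
+ ```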
+
+ ## Training
+
+ Transfer learning was used: a ViT model pre-trained on ImageNet-21k was fine-tuned on the beans dataset.
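+
+ The exact training script and hyperparameters are not documented in this repository, so the values below are placeholders. A minimal fine-tuning sketch with `datasets` and the `Trainer` API might look like this:
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import (
+     Trainer,
+     TrainingArguments,
+     ViTForImageClassification,
+     ViTImageProcessor,
+ )
+
+ ds = load_dataset("beans")  # splits: train / validation / test
+ processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
+
+ def transform(batch):
+     # Resize and normalize the PIL images into the tensors ViT expects
+     inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
+     inputs["labels"] = batch["labels"]
+     return inputs
+
+ ds = ds.with_transform(transform)
+
+ def collate(examples):
+     return {
+         "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
+         "labels": torch.tensor([e["labels"] for e in examples]),
+     }
+
+ # Start from the pre-trained backbone with a new 3-class head (see previous snippet)
+ model = ViTForImageClassification.from_pretrained(
+     "google/vit-base-patch16-224", num_labels=3, ignore_mismatched_sizes=True
+ )
+
+ args = TrainingArguments(
+     output_dir="vit-beans-model",
+     per_device_train_batch_size=16,   # placeholder hyperparameters
+     num_train_epochs=3,
+     learning_rate=2e-4,
+     remove_unused_columns=False,      # keep the "image" column so the transform can run
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=args,
+     train_dataset=ds["train"],
+     eval_dataset=ds["validation"],
+     data_collator=collate,
+ )
+ trainer.train()
+ ```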
+
+ ## Evaluation
+
+ This model was compared against zero-shot classification with CLIP (`openai/clip-vit-base-patch32`).
+
+ ### Zero-shot results on Oxford Pets (as required)
+
+ - **Accuracy**: 0.9993189573287964
+ - **Precision**: 0.5794700118713081
+ - **Recall**: 0.10156987264053896
+ - **Model used**: `openai/clip-vit-base-patch32`
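+
+ The evaluation script behind these numbers is not included here. As a rough illustration only, zero-shot classification of a single image with CLIP can be run as below; the prompt template, class names, and image path are placeholder assumptions:
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import CLIPModel, CLIPProcessor
+
+ model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
+ processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
+
+ class_names = ["angular leaf spot", "bean rust", "healthy"]  # adapt to the evaluation dataset
+ prompts = [f"a photo of a {name} leaf" for name in class_names]
+
+ image = Image.open("example_input.png")
+ inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
+
+ with torch.no_grad():
+     logits_per_image = model(**inputs).logits_per_image  # image-text similarity scores
+
+ probs = logits_per_image.softmax(dim=-1)
+ print(class_names[probs.argmax().item()])
+ ```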
+
+ ## Example
+
+ ```python
+ from transformers import ViTFeatureExtractor, ViTForImageClassification
+ from PIL import Image
+ import torch
+
+ # Load an example image and the fine-tuned checkpoint from the Hub.
+ # Note: ViTFeatureExtractor is deprecated in newer transformers releases;
+ # ViTImageProcessor is the drop-in replacement.
+ image = Image.open("example_input.png")
+ extractor = ViTFeatureExtractor.from_pretrained("LindiSimon/vit-beans-model")
+ model = ViTForImageClassification.from_pretrained("LindiSimon/vit-beans-model")
+
+ # Preprocess the image and run inference without gradient tracking
+ inputs = extractor(images=image, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Map the highest-scoring index back to its label name
+ predicted_class = logits.argmax(-1).item()
+ print(model.config.id2label[predicted_class])
+ ```