Umsakwa committed on
Commit ad7c20e · verified · 1 Parent(s): bf25767

Update README.md

Files changed (1)
  1. README.md +40 -187
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  tags:
- - image-classificaiton
  - vit
  - pytorch
  license: apache-2.0
@@ -12,224 +12,77 @@ metrics:
  - f1
  ---

-
  # Umsakwa/Uddayvit-image-classification-model

- This is a Vision Transformer (ViT)-based model fine-tuned for **image classification tasks**. It classifies images into predefined categories and is suitable for various real-world use cases, including object detection, plant disease identification, and more.
-
-

  ## Model Details

- - **Model Architecture**: Vision Transformer (ViT)
  - **Framework**: PyTorch
- - **Training Data**: The model was trained on [Your Dataset Name]. Include details such as the dataset size, number of classes, and source (e.g., public dataset on Hugging Face or custom dataset).
- - **Dataset Link**: [Dataset on Hugging Face](https://huggingface.co/datasets/your-dataset-name)
- - **Input Data**: The model accepts RGB images in standard formats (e.g., JPEG, PNG) and preprocesses them to the required input size (e.g., 224x224).
- - **Preprocessing**: The model uses a processor that tokenizes and normalizes the input images.
- -
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- The model’s performance is tied to the quality of the training dataset. For datasets significantly different from the one used for training, fine-tuning might be required.
- It is not robust to extreme distortions, occlusions, or very low-resolution images.
- The model may have biases inherited from the dataset.
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- Frameworks Used:
-
- Transformers (Hugging Face)
- PyTorch
- Hyperparameters:
-
- Epochs: 5
- Batch Size: 16
- Learning Rate: 5e-5
- Optimizer: AdamW
-
- Loss Function: Cross-Entropy Loss
-
- Hardware Used:
-
- GPU: NVIDIA Tesla T4
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  tags:
+ - image-classification
  - vit
  - pytorch
  license: apache-2.0
@@ -12,224 +12,77 @@ metrics:
  - f1
  ---

  # Umsakwa/Uddayvit-image-classification-model

+ This Vision Transformer (ViT) model has been fine-tuned for image classification on the [Beans Dataset](https://huggingface.co/datasets/beans), which consists of images of bean leaves categorized into three classes:
+
+ - **Angular Leaf Spot**
+ - **Bean Rust**
+ - **Healthy**

  ## Model Details

+ - **Architecture**: Vision Transformer (ViT)
+ - **Base Model**: `google/vit-base-patch16-224-in21k`
  - **Framework**: PyTorch
+ - **Task**: Image Classification
+ - **Labels**: 3 (angular_leaf_spot, bean_rust, healthy)
+ - **Input Shape**: 224x224 RGB images
+ - **Training Dataset**: [Beans Dataset](https://huggingface.co/datasets/beans)
+ - **Fine-Tuning**: The model was fine-tuned on the Beans dataset to classify bean plant diseases; the label set can be verified with the sketch below.
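+
+ A minimal way to confirm these labels from the dataset itself, assuming the 🤗 `datasets` library is installed:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the Beans dataset from the Hugging Face Hub
+ dataset = load_dataset("beans")
+
+ # Inspect the class labels the model was fine-tuned on
+ print(dataset["train"].features["labels"].names)
+ # ['angular_leaf_spot', 'bean_rust', 'healthy']
+ ```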

+ ### Model Description
+
+ The model uses the ViT architecture, which splits each input image into fixed-size patches and processes them with a transformer encoder. It has been trained to classify the three bean leaf disease classes above, making it useful for agricultural applications such as early disease detection and plant health monitoring.
+
+ - **Developed by**: Udday (Umsakwa)
+ - **Language(s)**: N/A (Image-based)
+ - **License**: Apache-2.0
+ - **Finetuned from**: `google/vit-base-patch16-224-in21k`
+
+ ### Model Sources
+
+ - **Repository**: [Umsakwa/Uddayvit-image-classification-model](https://huggingface.co/Umsakwa/Uddayvit-image-classification-model)

  ## Uses

  ### Direct Use

+ This model can be directly used for classifying bean leaf images into one of three categories: angular leaf spot, bean rust, or healthy.
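+
+ For a quick test, the high-level `pipeline` API works with this checkpoint (a minimal sketch; `transformers`, `torch`, and `Pillow` are assumed installed, and `path_to_image.jpg` is a placeholder):
+
+ ```python
+ from transformers import pipeline
+
+ # Build an image-classification pipeline around this checkpoint
+ classifier = pipeline("image-classification", model="Umsakwa/Uddayvit-image-classification-model")
+
+ # Accepts a local path, URL, or PIL image; returns labels with scores
+ print(classifier("path_to_image.jpg"))
+ ```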

+ ### Downstream Use
+
+ The model may also be fine-tuned further for similar agricultural image classification tasks or integrated into larger plant health monitoring systems.
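+
+ Further fine-tuning can follow the standard 🤗 `Trainer` recipe for ViT. A minimal sketch, assuming a dataset with `image` and `labels` columns (the Beans dataset is used as a stand-in; the batch size 16 and learning rate 5e-5 echo the earlier card revision but are illustrative):
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import (Trainer, TrainingArguments,
+                           ViTForImageClassification, ViTImageProcessor)
+
+ checkpoint = "Umsakwa/Uddayvit-image-classification-model"
+ processor = ViTImageProcessor.from_pretrained(checkpoint)
+ model = ViTForImageClassification.from_pretrained(checkpoint)
+
+ dataset = load_dataset("beans")  # stand-in: any dataset with `image` and `labels`
+
+ def transform(batch):
+     # Resize and normalize PIL images into the 224x224 tensors ViT expects
+     inputs = processor(images=batch["image"], return_tensors="pt")
+     inputs["labels"] = batch["labels"]
+     return inputs
+
+ dataset = dataset.with_transform(transform)
+
+ def collate_fn(examples):
+     # Stack on-the-fly transformed examples into a training batch
+     return {"pixel_values": torch.stack([e["pixel_values"] for e in examples]),
+             "labels": torch.tensor([e["labels"] for e in examples])}
+
+ args = TrainingArguments(output_dir="./vit-beans-finetuned", num_train_epochs=1,
+                          per_device_train_batch_size=16, learning_rate=5e-5,
+                          remove_unused_columns=False)
+
+ trainer = Trainer(model=model, args=args, data_collator=collate_fn,
+                   train_dataset=dataset["train"], eval_dataset=dataset["validation"])
+ trainer.train()
+ ```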

  ### Out-of-Scope Use

+ - The model is not suitable for non-agricultural image classification tasks without further fine-tuning.
+ - It is not robust to extreme distortions, occlusions, or very low-resolution images.

  ## Bias, Risks, and Limitations

+ - **Bias**: The dataset may contain biases due to the specific environmental or geographic conditions of the sampled plants.
+ - **Limitations**: Performance may degrade on datasets that differ significantly from the training data.

  ### Recommendations

+ - Users should evaluate the model on their own data before deployment; a minimal check is sketched below.
+ - Additional fine-tuning may be required for domain-specific applications.
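+
+ One way to run such a check, sketched here against the Beans validation split (swap in your own labeled images for a real pre-deployment test):
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import ViTForImageClassification, ViTImageProcessor
+
+ checkpoint = "Umsakwa/Uddayvit-image-classification-model"
+ model = ViTForImageClassification.from_pretrained(checkpoint).eval()
+ processor = ViTImageProcessor.from_pretrained(checkpoint)
+
+ dataset = load_dataset("beans", split="validation")
+
+ # Count top-1 matches against the reference labels
+ correct = 0
+ for example in dataset:
+     inputs = processor(images=example["image"], return_tensors="pt")
+     with torch.no_grad():
+         logits = model(**inputs).logits
+     correct += int(logits.argmax(-1).item() == example["labels"])
+
+ print(f"Validation accuracy: {correct / len(dataset):.3f}")
+ ```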

  ## How to Get Started with the Model

+ To use this model for inference:

+ ```python
+ from PIL import Image
+ from transformers import ViTForImageClassification, ViTImageProcessor
+
+ # Load model and processor
+ model = ViTForImageClassification.from_pretrained("Umsakwa/Uddayvit-image-classification-model")
+ processor = ViTImageProcessor.from_pretrained("Umsakwa/Uddayvit-image-classification-model")
+
+ # Prepare an image (the processor expects a PIL image, not a file path)
+ image = Image.open("path_to_image.jpg").convert("RGB")
+ inputs = processor(images=image, return_tensors="pt")
+
+ # Run inference and map the top logit to its class name
+ outputs = model(**inputs)
+ predicted_class = outputs.logits.argmax(-1).item()
+ print(model.config.id2label[predicted_class])
+ ```
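+
+ The logits are raw scores; apply `torch.softmax(outputs.logits, dim=-1)` to obtain per-class confidence values if needed.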