Sisigoks committed on
Commit c981222 · verified · 1 Parent(s): 6e6c748

Update README.md

- plants
- flora
- 10K
---

# 🌿 Sisigoks/FloraSense

**FloraSense** is a fine-tuned Vision Transformer (ViT) model for classifying plant species and other flora imagery. It builds on the `google/vit-base-patch16-224` base model and is fine-tuned on the **Planter_GARDEN_EDITION** dataset curated by [Sisigoks](https://huggingface.co/Sisigoks), which includes over 10,000 diverse plant images.

---

## 🧠 Model Description

- **Architecture**: Vision Transformer (ViT)
- **Base Model**: [`google/vit-base-patch16-224`](https://huggingface.co/google/vit-base-patch16-224)
- **Task**: Image Classification
- **Use Case**: Automated plant and flora species recognition in digital botany, garden classification systems, plant care apps, biodiversity projects, and educational tools.
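As a sketch of how such a fine-tune is typically set up with `transformers` (the class count below is a placeholder, not the model's actual head size):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Start from the same base checkpoint. num_labels is a placeholder --
# the real head size matches the dataset's label set.
base = "google/vit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(base)
model = AutoModelForImageClassification.from_pretrained(
    base,
    num_labels=10_000,             # placeholder class count
    ignore_mismatched_sizes=True,  # swap out the 1000-class ImageNet head
)
print(model.classifier.out_features)
```

`ignore_mismatched_sizes=True` lets the pretrained backbone load while the classification head is re-initialized at the new size.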
---

## 📊 Model Performance

- **Evaluation Accuracy**: **35.46%**
- **Evaluation Loss**: 4.2894
- **Epochs Trained**: 10
- **Evaluation Speed**:
  - 33.9 samples/sec
  - 2.12 steps/sec

> ⚠️ While the accuracy may appear moderate, the model distinguishes between over **10,000** highly similar plant species, a non-trivial fine-grained classification challenge.
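In a fine-grained setting like this, top-k predictions are often more actionable than a single top-1 guess. A self-contained toy sketch of that post-processing (the class count and `id2label` mapping below are stand-ins for the real model outputs):

```python
import torch

# Toy stand-ins for the model's 10,000-way logits and id->label mapping;
# with a real checkpoint these come from the forward pass and model.config.
num_classes = 10_000
logits = torch.randn(1, num_classes)
id2label = {i: f"species_{i}" for i in range(num_classes)}

# Convert to probabilities and keep the five most likely species.
probs = logits.softmax(-1)[0]
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values, top5.indices):
    print(f"{id2label[idx.item()]}: {score:.4f}")
```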

---

## 🧪 Training Procedure

| Hyperparameter        | Value                   |
|-----------------------|-------------------------|
| Learning Rate         | 5e-5                    |
| Train Batch Size      | 16                      |
| Eval Batch Size       | 16                      |
| Gradient Accumulation | 4                       |
| Total Effective Batch | 64                      |
| Optimizer             | Adam (β1=0.9, β2=0.999) |
| Scheduler             | Linear w/ warmup (10%)  |
| Epochs                | 15                      |
| Seed                  | 42                      |

- **Framework**: PyTorch
- **Libraries**: Transformers 4.45.1, Datasets 3.0.1, Tokenizers 0.20.0

---

## 📚 Dataset

- **Name**: [`Sisigoks/Planter_GARDEN_EDITION`](https://huggingface.co/datasets/Sisigoks/Planter_GARDEN_EDITION)
- **Type**: Image Classification
- **Language**: English
- **Scope**: Over 10,000 unique plant and floral species
- **Format**: Real-world garden and nature photography
- **Use Case**: Realistic and diverse training scenarios for classification models

---

## ✅ Intended Use

### Use Cases

- Botanical image recognition apps
- Educational tools for students and researchers
- Smart gardening & plant care solutions
- Field-use flora identification via AR and mobile apps

### Target Users

- Botanists
- AI and ML researchers
- Gardeners and farmers
- Biology educators and students

---

## ⚠️ Limitations

- May confuse visually similar species due to fine-grained class diversity.
- Performance could degrade in poor lighting or occlusion-heavy environments.
- Biases may exist based on the geographic scope of the dataset (e.g., underrepresentation of tropical or rare plants).

---

## 🔍 Ethical Considerations

- **Accuracy**: Misclassification of medicinal/toxic plants can have real-world safety implications.
- **Bias**: Regional, lighting, or season-specific training data may skew predictions in certain environments.
- **Usage**: This is a research-grade model and should not be relied on for critical decisions without expert validation.

---

## 🚀 How to Use

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Load model and processor
processor = AutoImageProcessor.from_pretrained("Sisigoks/FloraSense")
model = AutoModelForImageClassification.from_pretrained("Sisigoks/FloraSense")

# Load and preprocess the image
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Inference
with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits
predicted_label = logits.argmax(-1).item()

print(f"Predicted class: {model.config.id2label[predicted_label]}")
```
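Alternatively, the `pipeline` helper wraps the same preprocessing, inference, and label mapping in one call (the blank image below is only a stand-in input; pass a real photo or file path in practice):

```python
from PIL import Image
from transformers import pipeline

# Image-classification pipeline; returns label/score dicts sorted by score.
classifier = pipeline("image-classification", model="Sisigoks/FloraSense")

# Stand-in input: a blank image. Replace with Image.open("your_image.jpg").
image = Image.new("RGB", (224, 224), "green")
predictions = classifier(image, top_k=5)
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```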

## 📄 Citation

If you use this model or dataset in your work, please cite:

```bibtex
@misc{sisigoks_florasense_2025,
  author       = {Sisigoks},
  title        = {FloraSense: ViT-based Fine-Grained Plant Classifier},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Sisigoks/FloraSense}}
}
```

## 🙌 Acknowledgements

- Hugging Face 🤗 – for providing the model and dataset hosting infrastructure.
- Google Research – for the original ViT architecture that enabled scalable vision transformers.