TanmayNanda committed on
Commit 70862c4 · verified · 1 Parent(s): 6315b77

Update README.md

Files changed (1):
  1. README.md (+137 -3)
README.md CHANGED

---
license: apache-2.0
---
# LARS-MobileNet-V4

This repository contains the implementation of the lightweight convolutional neural network architecture described in the paper "Advancing Real-Time Crop Disease Detection on Edge Computing Devices using Lightweight Convolutional Neural Networks."

## Overview

This project introduces LARS-MobileNetV4, an optimized version of MobileNetV4 specifically designed for real-time crop disease detection on resource-constrained edge devices such as the Raspberry Pi. Our implementation achieves 97.84% accuracy on the Paddy Doctor dataset while maintaining fast inference times (88.91 ms on a Raspberry Pi 5), making it suitable for deployment in agricultural field settings.

## Key Features

- **Optimized MobileNetV4 Architecture**: Enhanced with Squeeze-and-Excitation (SE) blocks and Efficient Channel Attention (ECA) mechanisms
- **Resource-Efficient Design**: Significantly reduced model size (10.2 MB) compared to ResNet34 (85.3 MB)
- **Real-Time Performance**: Average inference time of 39 ms on CPU and 88.91 ms on a Raspberry Pi 5
- **High Accuracy**: 97.84% detection accuracy across 12 common rice diseases
- **Custom Loss Function**: Combination of Focal Loss and Label Smoothing for better handling of class imbalance
- **Comprehensive Data Augmentation**: Robust augmentation pipeline to improve model generalization
- **Deployment-Ready**: Optimized for TFLite deployment on edge devices (an inference sketch follows this list)

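This README does not ship deployment code; the snippet below is only a plausible sketch of how a converted TFLite export of the model might be run on a Raspberry Pi using the `tflite_runtime` interpreter. The model filename, the 224Γ—224 input size, and the ImageNet normalization constants are assumptions rather than values taken from the paper.

```python
# Hypothetical edge-inference sketch; the .tflite filename and preprocessing
# constants are illustrative assumptions, not artifacts of this repository.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="lars_mobilenetv4.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Preprocess one leaf image to the assumed 224x224 float32 input.
img = Image.open("leaf.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]  # assumed ImageNet stats
x = x[np.newaxis].astype(np.float32)                     # NHWC batch of one

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
print("Predicted class index:", int(np.argmax(probs)))
```
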
## Model Architecture

LARS-MobileNetV4 builds upon the recently introduced MobileNetV4 architecture with several key optimizations (a sketch of the attention blocks follows this list):

1. **Universal Inverted Bottleneck (UIB)**: Merges features of Inverted Bottlenecks, ConvNeXt, and Feed-Forward Networks to enhance flexibility in spatial and channel mixing
2. **Mobile Multi-Query Attention (MQA)**: An accelerator-optimized attention mechanism that reduces memory-bandwidth bottlenecks
3. **Squeeze-and-Excitation Blocks**: Added to adaptively recalibrate channel-wise feature responses
4. **Efficient Channel Attention (ECA)**: Captures cross-channel interactions with minimal computational overhead
5. **Neural Architecture Search (NAS)**: Tailors the architecture to specific target hardware

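The paper describes the SE and ECA blocks only at a high level; the PyTorch classes below are minimal reference sketches of both attention mechanisms as they are commonly defined, with the reduction ratio and 1D-convolution kernel size as assumed defaults rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: recalibrates channels using a global descriptor."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction=16 is an assumed default
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pool -> (B, C)
        return x * w.view(b, c, 1, 1)    # excite: per-channel rescaling

class ECABlock(nn.Module):
    """Efficient Channel Attention: 1D convolution over the channel descriptor."""
    def __init__(self, kernel_size: int = 3):  # kernel_size=3 is an assumed default
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.mean(dim=(2, 3)).unsqueeze(1)                           # (B, 1, C)
        y = self.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y
```
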
## Performance Comparison

| Model | Parameters (M) | Accuracy (%) | Model Size (MB) | Inference Time on CPU (ms) | Inference Time on Raspberry Pi 5 (ms) |
| --- | --- | --- | --- | --- | --- |
| ResNet34 | 21.79 | 97.50 | 85.3 | 148.93 | 264.50 |
| MobileNet-V2 | 3.5 | 92.42 | 9.2 | 40.00 | 73.09 |
| MobileNet-V3 | 2.5 | 95.62 | 10.3 | N/A | N/A |
| MobileNet-V4 | 3.8 | 97.17 | 10.2 | 39.20 | 88.91 |
| **LARS-MobileNet-V4** | **3.8** | **97.84** | **10.2** | **39.20** | **88.91** |

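The benchmarking script behind these timings is not included here; the snippet below is only a generic sketch of how single-image CPU latency can be measured in PyTorch (warm-up runs followed by a timed loop). The `timm` model name and the 224Γ—224 input resolution are assumptions.

```python
import time
import torch
import timm  # the variant name below is an assumption, not necessarily the one used in the paper

model = timm.create_model("mobilenetv4_conv_small", pretrained=False, num_classes=13).eval()
x = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image (assumed input size)

with torch.inference_mode():
    for _ in range(10):  # warm-up iterations
        model(x)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    elapsed_ms = (time.perf_counter() - start) / runs * 1000
print(f"Mean CPU latency: {elapsed_ms:.2f} ms")
```
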
## Training Strategies

Our implementation includes several optimization techniques; their individual contributions are summarized below (a sketch of a typical augmentation pipeline follows the table):

| Model Variation | Train Accuracy (%) | Test Accuracy (%) |
| --- | --- | --- |
| MobileNet-V4 Baseline | 99.93 | 97.17 |
| MobileNet-V4 (Augmentations) | 99.60 | 97.21 |
| MobileNet-V4 (FocalLabelSmoothingLoss) | 99.71 | 97.79 |
| MobileNet-V4 (Augmentations, FocalLabelSmoothingLoss, Squeeze-Excitation Blocks) | 99.68 | **97.84** |

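The exact augmentation policy is not listed in this README; the torchvision pipeline below is only a plausible sketch of the kind of training-time augmentation the "Augmentations" variant refers to, and every transform and parameter in it is an assumption.

```python
from torchvision import transforms

# Hypothetical training-time augmentation pipeline (all choices are assumptions).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Deterministic preprocessing for evaluation.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
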
### Custom Loss Function

We implement a combination of Focal Loss and Label Smoothing (a reference sketch follows the definitions):

1. **Label Smoothing**: Redistributes confidence across classes

   $$y_{smooth} = (1 - \varepsilon)\,y + \varepsilon / C$$

   where $\varepsilon$ is the smoothing factor and $C$ is the total number of classes.

2. **Focal Loss**: Focuses training on harder examples

   $$L_{focal}(p_t) = -\alpha (1 - p_t)^{\gamma} \log(p_t)$$

   where $p_t$ is the predicted probability of the true class.

3. **Combined Loss (FLS)**: Applies the focal modulation to the label-smoothed log-probability

   $$L_{FLS} = -\alpha (1 - p_t)^{\gamma} \log(p_{smooth})$$

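The configuration section names this loss `FocalLabelSmoothingComboLoss`; the class below is a hedged PyTorch sketch of the combined loss defined by the formulas above, not the authors' exact implementation, and the default values of alpha, gamma, and epsilon are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLabelSmoothingLoss(nn.Module):
    """Sketch of L_FLS: focal modulation applied to the label-smoothed log-likelihood."""
    def __init__(self, num_classes: int, alpha: float = 1.0, gamma: float = 2.0,
                 epsilon: float = 0.1):  # alpha, gamma, epsilon defaults are assumptions
        super().__init__()
        self.num_classes = num_classes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        log_probs = F.log_softmax(logits, dim=-1)
        probs = log_probs.exp()

        # y_smooth = (1 - eps) * y + eps / C, with y the one-hot target
        y_smooth = torch.full_like(log_probs, self.epsilon / self.num_classes)
        y_smooth.scatter_(1, target.unsqueeze(1),
                          1.0 - self.epsilon + self.epsilon / self.num_classes)

        # log(p_smooth): smoothed log-likelihood of each sample
        log_p_smooth = (y_smooth * log_probs).sum(dim=-1)

        # (1 - p_t)^gamma focal weight, with p_t the true-class probability
        p_t = probs.gather(1, target.unsqueeze(1)).squeeze(1)
        focal_weight = self.alpha * (1.0 - p_t) ** self.gamma

        return -(focal_weight * log_p_smooth).mean()

# Example usage (13 classes as in the configuration below):
# criterion = FocalLabelSmoothingLoss(num_classes=13)
```
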
## Requirements

```
torch
torchvision
timm
numpy
pandas
Pillow
scikit-learn
tqdm
wandb
```

### Data Preparation

Organize your data as follows:

```
β”œβ”€β”€ train_images/
β”‚   β”œβ”€β”€ disease_class_1/
β”‚   β”‚   β”œβ”€β”€ image1.jpg
β”‚   β”‚   β”œβ”€β”€ image2.jpg
β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ disease_class_2/
β”‚   └── ...
β”œβ”€β”€ test_images/
└── train.csv
```

The train.csv file should contain:

- `image_id`: Filename of the image
- `label`: Disease class name

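A data loader is not shown in this README; the class below is a minimal sketch of how `train.csv` could be paired with the folder layout above, assuming each image lives at `train_images/<label>/<image_id>` (this path convention is an assumption).

```python
import pandas as pd
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class PaddyDataset(Dataset):
    """Minimal sketch: pairs rows of train.csv with images under train_images/<label>/."""
    def __init__(self, csv_path: str, image_root: str, transform=None):
        self.df = pd.read_csv(csv_path)
        self.root = Path(image_root)
        self.classes = sorted(self.df["label"].unique())
        self.class_to_idx = {c: i for i, c in enumerate(self.classes)}
        self.transform = transform

    def __len__(self) -> int:
        return len(self.df)

    def __getitem__(self, idx: int):
        row = self.df.iloc[idx]
        img = Image.open(self.root / row["label"] / row["image_id"]).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, self.class_to_idx[row["label"]]

# Example usage (paths and transform are illustrative):
# train_set = PaddyDataset("train.csv", "train_images", transform=train_transform)
```
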
### Configuration

Key hyperparameters can be modified at the top of the script:

```python
LEARNING_RATE = 0.0001
ARCHITECTURE = "MobileNetV4"
EPOCHS = 50
BATCH_SIZE = 64
OPTIMISER = "Adam"
LOSS_FUNCTION = "FocalLabelSmoothingComboLoss"
NUM_CLASSES = 13  # 12 disease classes + 1 normal class
PRETRAINED = True
```

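The training script itself is not part of this README; the loop below is only a hedged sketch of how these constants could be wired together with `timm` and PyTorch. The `timm` variant name is an assumption (several MobileNetV4 configurations exist), and `FocalLabelSmoothingLoss`, `PaddyDataset`, and `train_transform` refer to the illustrative sketches earlier in this document.

```python
import timm
import torch
from torch.utils.data import DataLoader

# Assumed variant name; timm ships several MobileNetV4 configurations.
model = timm.create_model("mobilenetv4_conv_small",
                          pretrained=PRETRAINED, num_classes=NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = FocalLabelSmoothingLoss(num_classes=NUM_CLASSES)  # sketch from above
train_loader = DataLoader(
    PaddyDataset("train.csv", "train_images", transform=train_transform),
    batch_size=BATCH_SIZE, shuffle=True,
)

for epoch in range(EPOCHS):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```
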
## Citation

If you use this code in your research, please cite our paper:

```bibtex
@incollection{nanda2025lars,
  author    = {Nanda, T. R. and Shukla, A. and Srinivasa, T. R. and Bhargava, J. and Chauhan, S.},
  title     = {Advancing Real-Time Crop Disease Detection on Edge Computing Devices Using Lightweight Convolutional Neural Networks},
  editor    = {Arai, K.},
  booktitle = {Intelligent Systems and Applications. IntelliSys 2025},
  series    = {Lecture Notes in Networks and Systems},
  volume    = {1567},
  publisher = {Springer, Cham},
  year      = {2025},
  doi       = {10.1007/978-3-032-00071-2_33}
}
```

## Acknowledgements

- We use the Paddy Doctor dataset for training and evaluation [Petchiammal et al., 2022]