DeepActionPotential committed 38f41bb (verified · parent: 6bb078a): Update README.md

Files changed: README.md (+146, -134)
---
title: ReSegNet - Retina Blood Vessel Segmentation
emoji: 🤖
colorFrom: indigo
colorTo: blue
sdk: streamlit
sdk_version: 1.30.0
app_file: app.py
pinned: false
license: mit
---

## About the Project

This project performs automatic segmentation of blood vessels in retinal fundus images using deep learning. Accurate vessel segmentation is crucial for diagnosing and monitoring ophthalmic diseases such as diabetic retinopathy, glaucoma, and hypertensive retinopathy. The project uses a convolutional neural network to perform pixel-wise classification, distinguishing vessel structures from the background, and is designed for both research and practical clinical use.

The repository contains:
- A Jupyter notebook for end-to-end training, evaluation, and visualization.
- A Streamlit web application for interactive inference on new images.
- Pretrained model weights and demo media for quick experimentation.

---

## About the Dataset

The model is trained and evaluated on the [Retina Blood Vessel dataset](https://www.kaggle.com/datasets/abdallahwagih/retina-blood-vessel/data) from Kaggle. The dataset consists of high-resolution color fundus images and corresponding binary masks in which vessel pixels are annotated by experts.

**Dataset Structure:**
- `image/`: Original RGB fundus images.
- `mask/`: Ground-truth binary masks for vessel segmentation.

**Key Characteristics:**
- Images vary in illumination, contrast, and vessel visibility.
- Vessel pixels are a small fraction of the total image area, leading to class imbalance.
- The dataset is split into training and testing sets for model development and evaluation.
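Given this layout, images and masks can be paired by filename. The sketch below assumes matching filename stems between `image/` and `mask/` (an assumption; the exact naming scheme may differ):

```python
from pathlib import Path

def pair_images_and_masks(root):
    """Pair each fundus image with its ground-truth mask by filename stem.

    Assumes the layout described above: root/image/* and root/mask/*
    with identical stem names (an assumption; adjust the glob pattern
    to the dataset's actual extensions if needed).
    """
    root = Path(root)
    images = sorted((root / "image").glob("*"))
    masks = {p.stem: p for p in (root / "mask").glob("*")}
    # Keep only images that have a matching mask
    return [(img, masks[img.stem]) for img in images if img.stem in masks]
```

A loader like this makes it easy to spot missing masks before training rather than failing mid-epoch.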

---

## Notebook Summary

The provided notebook (`retina-blood-vessel-segmentation-f1-score-of-80.ipynb`) walks through the entire workflow:
1. **Problem Definition:** Outlines the clinical motivation and technical challenges.
2. **Data Preparation:** Loads images and masks, applies preprocessing (resizing, normalization), and splits the data into training and validation sets.
3. **Model Selection:** Uses a U-Net architecture with a ResNet34 encoder pretrained on ImageNet for effective feature extraction.
4. **Loss Function & Optimizer:** Combines Binary Cross-Entropy and Dice Loss to address class imbalance and improve segmentation accuracy.
5. **Training:** Implements training and validation loops with progress monitoring and checkpointing.
6. **Evaluation:** Computes metrics (F1, IoU, Precision, Recall, Accuracy) and visualizes predictions alongside ground truth.
7. **Saving:** Exports the trained model for deployment.

The notebook is modular, well commented, and suitable for both educational and research use.
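The training step with checkpointing and early stopping can be sketched framework-agnostically. Here `train_step` and `val_step` are hypothetical stand-ins for one epoch of work (the notebook's own loops are PyTorch-specific); the patience value is an assumption:

```python
def train_with_early_stopping(train_step, val_step, max_epochs=50, patience=5):
    """Skeleton of a training/validation loop with early stopping.

    `train_step(epoch)` runs one training epoch; `val_step(epoch)`
    returns the validation loss. The best epoch is remembered in place
    of an actual checkpoint save (e.g. torch.save in the real notebook).
    """
    best_loss, best_epoch, stale = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_step(epoch)
        val_loss = val_step(epoch)
        if val_loss < best_loss:
            # Improvement: record it and reset the patience counter
            best_loss, best_epoch, stale = val_loss, epoch, 0
        else:
            stale += 1
            if stale >= patience:  # stop once validation loss stops improving
                break
    return best_epoch, best_loss
```

The same skeleton works for any framework, since only the two callables touch the model.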

---

## Model Results

### Preprocessing

- **Image Normalization:** All images are scaled to [0, 1] and resized to 512x512 pixels to standardize input dimensions.
- **Mask Processing:** Masks are binarized and reshaped to match the model's output.
- **Augmentation:** (Optional) Techniques such as flipping, rotation, and brightness adjustment can be applied to improve generalization.
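A minimal NumPy sketch of the normalization and mask binarization steps; resizing (done in practice with e.g. `cv2.resize`) is omitted to keep the sketch NumPy-only, and the 127 threshold is an assumption:

```python
import numpy as np

def preprocess(image, mask, threshold=127):
    """Normalize an image and binarize its mask, as described above.

    `image` is an HxWx3 uint8 array, `mask` an HxW uint8 array.
    """
    x = image.astype(np.float32) / 255.0        # scale pixels to [0, 1]
    y = (mask > threshold).astype(np.float32)   # binarize the mask
    y = y[..., np.newaxis]                      # add a channel dim to match the model output
    return x, y
```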

### Training

- **Architecture:** U-Net with a ResNet34 encoder, using pretrained weights for faster convergence and better feature extraction.
- **Loss Function:** A combination of Binary Cross-Entropy and Dice Loss handles class imbalance and encourages overlap between predicted and true vessel regions.
- **Optimizer:** Adam, with a ReduceLROnPlateau scheduler that lowers the learning rate when the validation loss plateaus.
- **Epochs:** Trained for up to 50 epochs with early stopping based on validation loss.
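The combined loss can be illustrated in NumPy; the notebook itself uses the PyTorch equivalents, and the equal weighting and smoothing constant here are assumptions:

```python
import numpy as np

def bce_dice_loss(pred, target, eps=1e-7):
    """NumPy sketch of Binary Cross-Entropy + Dice Loss.

    `pred` holds predicted vessel probabilities in (0, 1) and `target`
    the binary ground-truth mask, with matching shapes.
    """
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    intersection = np.sum(pred * target)
    dice = (2 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce + (1 - dice)  # Dice loss is 1 minus the Dice coefficient
```

The BCE term penalizes each pixel independently, while the Dice term rewards overlap, which keeps the sparse vessel class from being ignored.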

### Evaluation

- **Metrics:** F1 Score, Jaccard Index (IoU), Precision, Recall, and Accuracy.
- **Results:** An F1 score of **80%** on the test set, indicating strong performance in segmenting fine vessel structures.
- **Visualization:** The notebook provides side-by-side comparisons of original images, ground-truth masks, and model predictions for qualitative assessment.
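All of these metrics reduce to pixel-wise confusion counts. A NumPy sketch (the notebook computes the same quantities, e.g. via scikit-learn):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise metrics for binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # vessel pixels correctly predicted
    fp = np.sum(pred & ~target)   # background predicted as vessel
    fn = np.sum(~pred & target)   # vessel predicted as background
    tn = np.sum(~pred & ~target)  # background correctly predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    accuracy = (tp + tn) / pred.size
    return {"f1": f1, "iou": iou, "precision": precision,
            "recall": recall, "accuracy": accuracy}
```

Note that plain accuracy is misleading here: with vessels covering only a small fraction of pixels, predicting all background already scores high, which is why F1 and IoU are the headline metrics.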

---

## How to Install

Follow these steps to set up the environment using Python's `venv`:

```bash
# Clone the repository
git clone https://github.com/DeepActionPotential/ReSegNet
cd ReSegNet

# Create a virtual environment
python -m venv venv

# Activate the virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install the required packages
pip install -r requirements.txt
```

---

## How to Use the Software

### Web Demo

1. Ensure the trained model weights are available in the `models/` directory.
2. Run the Streamlit app:
   ```bash
   streamlit run app.py
   ```
3. Upload a retinal image through the web interface and click "Run Segmentation" to see the predicted vessel mask.

### Demo Media

[Demo video](demo/ReSegNet-demo.mp4)

![Before segmentation](demo/befire.png)
![After segmentation](demo/after.jpg)

---

## Technologies Used

### Model Training

- **PyTorch:** Core deep learning framework for model definition, training, and evaluation.
- **segmentation-models-pytorch:** High-level implementations of popular segmentation architectures (e.g., U-Net, FPN) with pretrained encoders.
- **OpenCV & NumPy:** Image processing, augmentation, and efficient data handling.
- **Matplotlib:** Visualization of images, masks, and results.
- **scikit-learn:** Calculation of evaluation metrics (F1, IoU, Precision, Recall, Accuracy).

### Deployment

- **Streamlit:** Rapid development of interactive web applications for model inference and visualization.
- **Pillow:** Image loading and preprocessing in the web app.

These technologies support a robust, reproducible, and user-friendly workflow from model development to deployment.

---

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.