## CVRP: A Rice Image Dataset with High-Quality Annotations for Image Segmentation and Plant Phenomics Research

##### The multi-cultivar and multi-view rice plant image dataset (CVRP) consists of 2,303 field images with annotated masks and 123 indoor images of individual panicles.

### Annotation Workflow

##### To streamline annotation, we combine deep learning with manual curation. The workflow comprises two stages: manual annotation, and model-based prediction followed by manual curation.


### Getting Started

##### We recommend Python 3.7, CUDA 11.3, and PyTorch 1.10.0.

***Clone MMSegmentation***

```bash
git clone https://github.com/open-mmlab/mmsegmentation.git -b v1.1.2

pip install -U openmim
mim install mmengine
mim install mmcv==2.0.0

pip install -r run/requirements.txt
cd mmsegmentation
pip install -v -e .
```

***Creating a Dataset***

1. Directory structure of the dataset:

📁 CVRPDataset/
├─ 📁 images/
└─ 📁 labelme_jsons/

2. Convert the labelme annotations to masks:

```bash
python run/labelme2mask.py
```
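
For reference, a minimal sketch of what such a labelme-to-mask conversion does (hypothetical; the actual `run/labelme2mask.py` may differ, and the class name below assumes labelme's standard JSON layout with `imageWidth`, `imageHeight`, and polygon `shapes`):

```python
# Hypothetical sketch of a labelme-JSON-to-mask conversion; the real
# run/labelme2mask.py may differ. Assumes the standard labelme JSON layout.
import json
import numpy as np
from PIL import Image, ImageDraw

CLASS_IDS = {'panicle': 1}  # background pixels stay 0

def labelme_json_to_mask(json_path):
    """Rasterise the polygon shapes of one labelme JSON file into a mask."""
    with open(json_path) as f:
        ann = json.load(f)
    mask = Image.new('L', (ann['imageWidth'], ann['imageHeight']), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann['shapes']:
        class_id = CLASS_IDS.get(shape['label'], 0)
        polygon = [tuple(pt) for pt in shape['points']]
        draw.polygon(polygon, fill=class_id)  # fill polygon with class index
    return np.asarray(mask)  # (height, width) array of class indices
```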

Now the structure looks like:

📁 CVRPDataset/
├─ 📁 img_dir/
└─ 📁 ann_dir/

3. Split the dataset into training and test sets:

```bash
python run/split_dataset.py
```

You can download our training and test sets [here](http://61.155.111.202:18080/cvrp/dataset).
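
The split step can be sketched roughly as follows (a hypothetical illustration; the actual `run/split_dataset.py` may use a different ratio, seed, or file naming):

```python
# Hypothetical sketch of an image/mask train-val split;
# run/split_dataset.py itself may differ.
import random
import shutil
from pathlib import Path

def split_dataset(root='CVRPDataset', val_ratio=0.2, seed=0):
    """Move paired .jpg images and .png masks into train/ and val/ subfolders."""
    root = Path(root)
    stems = sorted(p.stem for p in (root / 'img_dir').glob('*.jpg'))
    random.Random(seed).shuffle(stems)  # deterministic shuffle
    n_val = int(len(stems) * val_ratio)
    splits = {'val': stems[:n_val], 'train': stems[n_val:]}
    for split, names in splits.items():
        for sub, ext in (('img_dir', '.jpg'), ('ann_dir', '.png')):
            (root / sub / split).mkdir(parents=True, exist_ok=True)
            for name in names:
                shutil.move(str(root / sub / (name + ext)),
                            str(root / sub / split / (name + ext)))
    return splits
```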

***Dataset Configs***

```bash
cd mmseg/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP.py
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/__init__.py
cd ../../configs/_base_/datasets
wget https://huggingface.co/CVRPDataset/Model/resolve/main/dataset_configs/CVRP_pipeline.py
```

If you want to register your own dataset:

1. Import and register the dataset in `mmseg/datasets/__init__.py`:

```python
from .CVRP import CVRPDataset
```

and add it to the export list:

```python
# ... other datasets
__all__ = ['CVRPDataset']
```

2. Register the dataset class in `mmseg/datasets/CVRP.py`:

```python
class CVRPDataset(BaseSegDataset):
    METAINFO = {
        'classes': ['background', 'panicle'],
        'palette': [[127, 127, 127], [200, 0, 0]]
    }
```
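
The `palette` entries map each class index in a predicted mask to an RGB colour. As an illustration (the function name here is ours, not part of MMSegmentation):

```python
# Illustrative only: colourise an (H, W) class-index mask with the
# METAINFO palette above.
import numpy as np

PALETTE = np.array([[127, 127, 127],   # 0: background -> grey
                    [200, 0, 0]])      # 1: panicle    -> red

def colorize(mask):
    """Map an (H, W) array of class indices to an (H, W, 3) uint8 RGB image."""
    return PALETTE[mask].astype(np.uint8)
```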

3. Modify the data-processing pipeline in `configs/_base_/datasets/CVRP_pipeline.py`:

```python
dataset_type = 'CVRPDataset'
data_root = 'CVRPDataset/'
```

You'll also need to specify the paths of the training and evaluation data directories:

```python
# train_dataloader:
data_prefix=dict(img_path='img_dir/train', seg_map_path='ann_dir/train')
# val_dataloader:
data_prefix=dict(img_path='img_dir/val', seg_map_path='ann_dir/val')
```
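
Put together, the relevant dataloader fragment of `CVRP_pipeline.py` might look like the following (illustrative only; the batch sizes are assumptions, and the downloadable `CVRP_pipeline.py` above is authoritative):

```python
# Illustrative MMSegmentation-style dataloader fragment; batch sizes are
# assumptions, not values from the released config.
dataset_type = 'CVRPDataset'
data_root = 'CVRPDataset/'

train_dataloader = dict(
    batch_size=2,
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(img_path='img_dir/train',
                         seg_map_path='ann_dir/train')))

val_dataloader = dict(
    batch_size=1,
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(img_path='img_dir/val',
                         seg_map_path='ann_dir/val')))
```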

***Model Configs***

You can generate model config files with *run_configs.py*:

```bash
mkdir work_dirs CVRP_configs outputs
python ../run/run_configs.py --model_name deeplabv3plus -m configs/deeplabv3plus/deeplabv3plus_r101-d8_4xb4-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name knet -m configs/knet/knet-s3_swin-l_upernet_8xb2-adamw-80k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name mask2former -m configs/mask2former/mask2former_swin-l-in22k-384x384-pre_8xb2-160k_ade20k-640x640.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs

python ../run/run_configs.py --model_name segformer -m configs/segformer/segformer_mit-b5_8xb2-160k_ade20k-512x512.py -d configs/_base_/datasets/CVRP_pipeline.py -c 2 -w work_dirs -s CVRP_configs
```

Alternatively, you can download the model config files [here](https://huggingface.co/CVRPDataset/Model/tree/main/model_configs):

```bash
cd CVRP_configs
wget https://huggingface.co/CVRPDataset/Model/resolve/main/model_configs/CVRP_mask2former.py
```

***Train***

```bash
python tools/train.py CVRP_configs/CVRP_mask2former.py
```

Alternatively, you can download a trained checkpoint [here](https://huggingface.co/CVRPDataset/Model/tree/main/checkpoint).

***Test***

```bash
python ../run/test.py -d CVRPDataset/val -m CVRP_configs/CVRP_mask2former.py -pth work_dirs/CVRP_mask2former/Mask2Former.pth -o outputs/CVRP_mask2former
```
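
Segmentation quality on the test set is typically summarised per class with IoU; a minimal sketch of that metric (not necessarily the metric code `test.py` uses):

```python
# Minimal intersection-over-union sketch for binary panicle masks;
# the metrics reported by test.py itself may be computed differently.
import numpy as np

def iou(pred, gt, class_id=1):
    """IoU of one class between a predicted and a ground-truth index mask."""
    p = (pred == class_id)
    g = (gt == class_id)
    union = np.logical_or(p, g).sum()
    if union == 0:
        return 1.0  # class absent from both masks: treat as a perfect match
    return np.logical_and(p, g).sum() / union
```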

### UI

##### We provide a web user interface for annotation based on Gradio:

```bash
python run/app.py
```

##### The UI:

1. Users can upload an image or use a sample image at ①, then select one of four models at ②. We recommend **Mask2Former**. After that, click *Run*.
2. Two forms of segmentation results are available for download at ③.


### Labelme

##### If you need to adjust the annotations manually, you can use labelme:

```bash
python run/mask2json.py
pip install labelme==3.16.7
labelme
```

After editing, convert the JSON files back to PNG masks:

```bash
python run/json2png.py
```

### Citation

Please consider citing our paper if you find this work useful!

### Acknowledgements

We thank Mr. Zhitao Zhu, Dr. Weijie Tang, and Dr. Yunhui Zhang for their technical support.