---
annotations_creators:
- crowdsourced
language:
- zh
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: "15种蔬菜数据集"
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- "蔬菜"
- "图像分类"
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
## Vegetable Image Dataset
### Background
The original experiment was conducted with 15 common vegetables found around the world. The vegetables selected were: bean, bitter gourd, bottle gourd, brinjal (eggplant), broccoli, cabbage, capsicum, carrot, cauliflower, cucumber, papaya, potato, pumpkin, radish, and tomato. A total of 21,000 images across the 15 classes were used, with each class containing 1,400 images of size 224×224 in *.jpg format. 70% of the dataset is used for training, 15% for validation, and 15% for testing.
### Directory Layout
This dataset contains three folders:
- train (15,000 images)
- test (3,000 images)
- validation (3,000 images)
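The layout above can be sketched with a small self-contained mock; the class subfolder names (`Bean`, `Tomato`) follow the English keys used in the generation script below, and the per-split counts are for the mock only, not the real dataset:

```python
import tempfile
from pathlib import Path

# Build a tiny mock of the expected layout:
#   Vegetable Images/<split>/<class>/<image>.jpg
# (only 2 classes and 3 images each for illustration; the real set has 15 classes)
root = Path(tempfile.mkdtemp()) / 'Vegetable Images'
for split in ('train', 'test', 'validation'):
    for cls in ('Bean', 'Tomato'):
        d = root / split / cls
        d.mkdir(parents=True)
        for i in range(3):
            (d / f'{i:04d}.jpg').touch()

# Count images per split, exactly as one would for the real data
counts = {split: sum(1 for _ in (root / split).rglob('*.jpg'))
          for split in ('train', 'test', 'validation')}
print(counts)  # each split holds 2 classes x 3 images = 6 files
```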
### Data Collection
The images in this dataset were collected from vegetable farms and markets for a project.
### Generating the Metadata Files
Running the Python code below generates three CSV metadata files on the desktop, plus one class-name file (which needs to be placed alongside the data files).
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
1. Download the data file Vegetable Images.zip and unzip it to the desktop.
2. Run `python generate.py` to generate the three metadata files and the
   class-name file.
"""
import os
from pathlib import Path

category_dict = {
    'Bean': '豆类',
    'Bitter_Gourd': '苦瓜',
    'Bottle_Gourd': '葫芦',
    'Brinjal': '茄子',
    'Broccoli': '西兰花',
    'Cabbage': '卷心菜',
    'Capsicum': '辣椒',
    'Carrot': '胡萝卜',
    'Cauliflower': '花椰菜',
    'Cucumber': '黄瓜',
    'Papaya': '木瓜',
    'Potato': '土豆',
    'Pumpkin': '南瓜',
    'Radish': '萝卜',
    'Tomato': '番茄',
}

base_path = Path.home().joinpath('desktop')

# Note: relies on dicts preserving insertion order (guaranteed since Python 3.7)
data = '\n'.join(category_dict.values())
base_path.joinpath('classname.txt').write_text(data, encoding='utf-8')


def create(filename):
    csv_path = base_path.joinpath(f'{filename}.csv')
    with csv_path.open('wt', encoding='utf-8', newline='') as csv_file:
        csv_file.writelines([f'image,category{os.linesep}'])
        data_path = base_path.joinpath('Vegetable Images', filename)
        batch = 0
        datas = []
        keys = list(category_dict.keys())
        for image_path in data_path.rglob('*.jpg'):
            batch += 1
            # Path relative to base_path, normalized to forward slashes
            # (str.removeprefix requires Python 3.9+)
            part1 = str(image_path).removeprefix(str(base_path)).replace('\\', '/')[1:]
            part2 = keys.index(image_path.parents[0].name)
            datas.append(f'{part1},{part2}{os.linesep}')
            if batch >= 100:  # flush the buffer every 100 rows
                csv_file.writelines(datas)
                datas.clear()
                batch = 0
        if datas:
            csv_file.writelines(datas)
    return csv_path.stat().st_size


if __name__ == '__main__':
    print(create('train'))
    print(create('test'))
    print(create('validation'))
```
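The generated CSVs can then be read back with the standard library. A minimal sketch, parsing an in-memory sample in the same shape as the generated files (the real paths are machine-specific, so none are assumed here):

```python
import csv
import io

# A tiny sample shaped like the generated metadata files:
# a header row, then one "relative path,label index" row per image
sample = (
    'image,category\n'
    'Vegetable Images/train/Bean/0001.jpg,0\n'
    'Vegetable Images/train/Tomato/0001.jpg,14\n'
)

# Class names in the same insertion order as category_dict above
classnames = ['豆类', '苦瓜', '葫芦', '茄子', '西兰花', '卷心菜', '辣椒',
              '胡萝卜', '花椰菜', '黄瓜', '木瓜', '土豆', '南瓜', '萝卜', '番茄']

# Map each row's numeric label back to its Chinese class name
rows = list(csv.DictReader(io.StringIO(sample)))
labels = [classnames[int(r['category'])] for r in rows]
print(labels)  # ['豆类', '番茄']
```

To read a file produced by `create()`, replace `io.StringIO(sample)` with an open file handle (e.g. `open('train.csv', encoding='utf-8', newline='')`).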
### Acknowledgements
Many thanks to the provider of the original dataset, [Vegetable Image Dataset](https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset).
### Cloning the Data
```bash
git clone https://huggingface.co/datasets/cc92yy3344/vegetable.git
```