|
|
--- |
|
|
license: cc-by-nc-4.0 |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
task_categories: |
|
|
- image-segmentation |
|
|
- image-classification |
|
|
pretty_name: RaspGrade |
|
|
dataset_info: |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: labels |
|
|
sequence: |
|
|
sequence: float64 |
|
|
- name: image_id |
|
|
dtype: string |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 208837995 |
|
|
num_examples: 160 |
|
|
- name: valid |
|
|
num_bytes: 52068619 |
|
|
num_examples: 40 |
|
|
download_size: 242513653 |
|
|
dataset_size: 260906614 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: data/train-* |
|
|
- split: valid |
|
|
path: data/valid-* |
|
|
tags: |
|
|
- food |
|
|
- foodquality |
|
|
--- |
|
|
|
|
|
|
|
# 🍓 The RaspGrade Dataset: Towards Automatic Raspberry Ripeness Grading with Deep Learning |
|
|
This research investigates computer vision for rapid, accurate, and non-invasive food quality assessment. It focuses on the novel challenge of grading raspberries into five distinct classes in real time, in an industrial environment, as the fruits move along a conveyor belt.
|
|
To address this, a dedicated dataset of raspberries, namely RaspGrade, was acquired and meticulously annotated. |
|
|
|
|
|
Instance segmentation experiments revealed that accurate fruit-level masks can be obtained. However, classifying certain raspberry grades remains challenging due to color similarities and occlusion, while other grades are readily distinguished by color.
|
|
|
|
|
🤗 [Paper on Hugging Face](https://huggingface.co/papers/2505.08537) | 📝 [Paper on ArXiv](https://arxiv.org/abs/2505.08537) |
|
|
|
|
|
## 🗂️ Data Instances |
|
|
<figure style="display:flex; gap:10px; flex-wrap:wrap; justify-content:center;"> |
|
|
<img src="1.png" width="45%" alt="Raspberry Example 1"> |
|
|
<img src="3.png" width="45%" alt="Raspberry Example 2"> |
|
|
</figure> |
|
|
|
|
|
## 🏷️ Annotation Format |
|
|
The annotations follow the YOLO instance segmentation format: each entry in `labels` is a row containing a class id followed by the normalized (x, y) coordinates of a polygon outlining one fruit.
|
|
|
|
|
Please refer to [this page](https://docs.ultralytics.com/datasets/segment/) for more info. |
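For orientation, here is a minimal sketch of how one such label row can be split into its class id and polygon vertices. The row layout `[class_id, x1, y1, x2, y2, ...]` with coordinates normalized to [0, 1] is the YOLO segmentation convention; `parse_label_row` is an illustrative helper, not part of the dataset.

```python
def parse_label_row(row):
    """Split a YOLO-segmentation label row into (class_id, polygon).

    `row` is [class_id, x1, y1, x2, y2, ...] with coordinates
    normalized to [0, 1] relative to image width/height.
    """
    class_id = int(row[0])
    coords = row[1:]
    # Pair up alternating x and y values into (x, y) vertices.
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_id, polygon

class_id, polygon = parse_label_row([2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
print(class_id, polygon)  # 2 [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)]
```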
|
|
|
|
|
## 🧪 How to read and display examples |
|
|
```python |
|
|
from datasets import load_dataset |
|
|
from PIL import Image, ImageDraw |
|
|
import numpy as np |
|
|
import random |
|
|
|
|
|
# --- Configuration --- |
|
|
DATASET_NAME = "FBK-TeV/RaspGrade" |
|
|
SAMPLE_INDEX = 0 # Index of the sample to visualize from the 'valid' split |
|
|
OUTPUT_IMAGE = 'annotated_hub_image_fixed_colors.png' |
|
|
ALPHA = 128 # Transparency level for masks |
|
|
|
|
|
# Define a color map for different classes |
|
|
CLASS_COLORS = { |
|
|
0: (255, 0, 0, ALPHA), # Red |
|
|
1: (0, 255, 0, ALPHA), # Green |
|
|
2: (0, 0, 255, ALPHA), # Blue |
|
|
3: (255, 255, 0, ALPHA), # Yellow |
|
|
4: (255, 0, 255, ALPHA) # Magenta |
|
|
# Add more colors for other class IDs if needed |
|
|
} |
|
|
|
|
|
def convert_normalized_polygon_to_pixels(polygon_normalized, width, height): |
|
|
polygon_pixels = (np.array(polygon_normalized).reshape(-1, 2) * np.array([width, height])).astype(int).flatten().tolist() |
|
|
return polygon_pixels |
|
|
|
|
|
if __name__ == "__main__": |
|
|
try: |
|
|
dataset = load_dataset(DATASET_NAME) |
|
|
if 'valid' not in dataset: |
|
|
raise ValueError(f"Split 'valid' not found in dataset '{DATASET_NAME}'") |
|
|
valid_dataset = dataset['valid'] |
|
|
if SAMPLE_INDEX >= len(valid_dataset): |
|
|
raise ValueError(f"Sample index {SAMPLE_INDEX} is out of bounds for the 'valid' split (size: {len(valid_dataset)})") |
|
|
|
|
|
sample = valid_dataset[SAMPLE_INDEX] |
|
|
original_image = sample['image'].convert("RGBA") |
|
|
width, height = original_image.size |
|
|
|
|
|
mask = Image.new('RGBA', (width, height), (0, 0, 0, 0)) |
|
|
mask_draw = ImageDraw.Draw(mask, 'RGBA') |
|
|
|
|
|
labels = sample['labels'] |
|
|
if isinstance(labels, list): |
|
|
for annotation in labels: |
|
|
                if len(annotation) > 4:  # class id plus at least two (x, y) pairs
|
|
class_id = int(annotation[0]) |
|
|
                    polygon_normalized = [float(v) for v in annotation[1:]]
|
|
polygon_pixels = convert_normalized_polygon_to_pixels(polygon_normalized, width, height) |
|
|
color = CLASS_COLORS.get(class_id, (255, 255, 255, ALPHA)) # Default to white if class_id not in map |
|
|
mask_draw.polygon(polygon_pixels, fill=color) |
|
|
|
|
|
annotated_image = Image.alpha_composite(original_image, mask) |
|
|
|
|
|
annotated_image.save(OUTPUT_IMAGE) |
|
|
print(f"Annotated image with fixed colors saved as {OUTPUT_IMAGE}") |
|
|
annotated_image.show() |
|
|
|
|
|
except Exception as e: |
|
|
print(f"An error occurred: {e}") |
|
|
``` |
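To get a quick sense of class balance across a split, the class ids in the same `labels` field can be tallied per image. This is a sketch using a synthetic example; `count_classes` is an illustrative helper, and in practice you would pass it `dataset['valid']['labels']`.

```python
from collections import Counter

def count_classes(labels_per_image):
    """Tally class ids across a list of per-image label lists.

    Each inner list holds rows of [class_id, x1, y1, ...], as in
    the RaspGrade `labels` feature.
    """
    counter = Counter()
    for rows in labels_per_image:
        for row in rows:
            counter[int(row[0])] += 1
    return counter

# Synthetic example: two images with three annotations in total.
print(count_classes([[[0, 0.1, 0.2], [1, 0.3, 0.4]], [[0, 0.5, 0.6]]]))
# Counter({0: 2, 1: 1})
```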
|
|
|
|
|
## 🙏 Acknowledgement |
|
|
<style> |
|
|
.list_view{ |
|
|
display:flex; |
|
|
align-items:center; |
|
|
} |
|
|
.list_view p{ |
|
|
padding:10px; |
|
|
} |
|
|
</style> |
|
|
<div class="list_view"> |
|
|
<a href="https://agilehand.eu/" target="_blank"> |
|
|
<img src="AGILEHAND.png" alt="AGILEHAND logo" style="max-width:200px"> |
|
|
</a> |
|
|
<p style="line-height: 1.6;"> |
|
|
This work is supported by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101092043, project AGILEHAND (Smart Grading, Handling and Packaging Solutions for Soft and Deformable Products in Agile and Reconfigurable Lines).
|
|
</p> |
|
|
</div> |
|
|
|
|
|
## 🤝 Partners |
|
|
<div style="display: flex; flex-wrap: wrap; justify-content: center; gap: 40px; align-items: center;"> |
|
|
<a href="https://www.fbk.eu/en" target="_blank"><img src="FBK.jpg" width="180" alt="FBK logo"></a> |
|
|
<a href="https://www.santorsola.com/" target="_blank"><img src="Santorsola.jpeg" width="250" alt="Santorsola logo"></a> |
|
|
</div> |
|
|
|
|
|
|
|
|
## 📖 Citation |
|
|
```bibtex |
|
|
@article{mekhalfi2025raspgrade, |
|
|
title={The RaspGrade Dataset: Towards Automatic Raspberry Ripeness Grading with Deep Learning}, |
|
|
author={Mekhalfi, Mohamed Lamine and Chippendale, Paul and Poiesi, Fabio and Bonecher, Samuele and Osler, Gilberto and Zancanella, Nicola}, |
|
|
journal={arXiv preprint arXiv:2505.08537}, |
|
|
year={2025} |
|
|
} |
|
|
``` |