- Paper: https://arxiv.org/abs/2503.13082

## Dataset Description
<img src="dataset-1.png" alt="Teaser" width="500">

Examples of FreeGraspData at different task difficulties, with three user-provided instructions. The star indicates the target object, and the green circles indicate the ground-truth objects to pick.

We introduce the free-form language grasping dataset (FreeGraspData), a novel dataset built upon MetaGraspNetV2 (1) to evaluate the robotic grasping task with free-form language instructions.
MetaGraspNetV2 is a large-scale simulated dataset featuring challenging aspects of robot vision in the bin-picking setting, including multi-view RGB-D images and metadata, e.g., object categories, amodal segmentation masks, and occlusion graphs indicating occlusion relationships between objects from each viewpoint.
To build FreeGraspData, we selected scenes containing at least four objects to ensure sufficient scene clutter.

FreeGraspData extends MetaGraspNetV2 in three aspects:
- i) we derive the ground-truth grasp sequence for reaching the target object from the occlusion graphs;
- ii) we categorize the task difficulty based on the obstruction level and instance ambiguity;
- iii) we provide free-form language instructions collected from human annotators.

**Ground-truth grasp sequence**

We obtain the ground-truth grasp sequence from the object occlusion graphs provided in MetaGraspNetV2.
Since visual occlusion does not necessarily imply obstruction, we first prune the edges of the provided occlusion graph that are unlikely to correspond to an obstruction. Following the heuristic that less occlusion means a lower chance of obstruction, we remove the edges where the occluded area of the occluded object is below $1\%$.
Starting from the node representing the target object, we then traverse the pruned graph to locate a leaf node, i.e., the ground-truth object to grasp first. The sequence from the leaf node to the target node forms the correct sequence for the robotic grasping task.
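
The pruning and traversal described above can be sketched in a few lines. This is a minimal illustration, not the released tooling: the dict-based graph encoding and the function name are assumptions.

```python
def grasp_sequence(target, occlusions, min_fraction=0.01):
    """Return a grasp order ending at `target`.

    `occlusions` is a hypothetical encoding of the occlusion graph: it maps
    each object id to a list of (blocker_id, occluded_fraction) pairs, where
    `occluded_fraction` is the fraction of the object's area hidden by the
    blocker.
    """
    # Prune edges unlikely to form an obstruction (occluded area below 1%).
    pruned = {
        obj: [blocker for blocker, frac in blockers if frac >= min_fraction]
        for obj, blockers in occlusions.items()
    }

    order = []

    def visit(obj):
        for blocker in pruned.get(obj, []):
            visit(blocker)  # blockers must be removed before `obj`
        if obj not in order:
            order.append(obj)

    visit(target)
    return order  # unobstructed (leaf) objects first, the target last


# Example: "box" obstructs "cup"; "pen" barely occludes "box", so that edge
# is pruned and "pen" never enters the sequence.
occlusions = {"cup": [("box", 0.30)], "box": [("pen", 0.004)], "pen": []}
print(grasp_sequence("cup", occlusions))  # -> ['box', 'cup']
```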

**Grasp Difficulty Categorization**

We use the pruned occlusion graph to classify the grasping difficulty of target objects into three levels:

- **Easy**: Unobstructed target objects (leaf nodes in the pruned occlusion graph).
- **Medium**: Objects obstructed by at least one object (maximum hop distance to leaf nodes is 1).
- **Hard**: Objects obstructed by a chain of other objects (maximum hop distance to leaf nodes is more than 1).

Objects are labeled as **Ambiguous** if multiple instances of the same class exist in the scene.

This results in six robotic grasping difficulty categories:

- **Easy without Ambiguity**
- **Medium without Ambiguity**
- **Hard without Ambiguity**
- **Easy with Ambiguity**
- **Medium with Ambiguity**
- **Hard with Ambiguity**

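The hop-distance rule above can be expressed directly over the pruned graph. This is an illustrative sketch under the same assumed dict-based encoding; `difficulty` and `class_counts` are hypothetical names.

```python
def max_hops_to_leaf(obj, blockers):
    """Maximum hop distance from `obj` to an unobstructed (leaf) object.

    `blockers` maps each object id to the ids of the objects directly
    obstructing it in the pruned obstruction graph.
    """
    direct = blockers.get(obj, [])
    if not direct:
        return 0  # leaf node: nothing obstructs this object
    return 1 + max(max_hops_to_leaf(b, blockers) for b in direct)


def difficulty(obj, blockers, obj_class, class_counts):
    """Map an object to one of the six categories described above."""
    hops = max_hops_to_leaf(obj, blockers)
    level = "Easy" if hops == 0 else "Medium" if hops == 1 else "Hard"
    # Ambiguous if the scene contains several instances of the same class.
    ambiguous = class_counts.get(obj_class, 0) > 1
    return level, ambiguous
```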
**Free-form language user instructions**

For each of the six difficulty categories, we randomly select 50 objects, resulting in 300 robotic grasping scenarios.
For each scenario, we provide multiple users with a top-down image of the bin and a visual indicator highlighting the target object.
No additional context or information about the object is provided.
We instruct each user to provide, to the best of their ability, an unambiguous natural-language description of the indicated object.
In total, ten users with a wide age span are involved in the data collection procedure.
We randomly select three user instructions for each scenario, yielding a total of 900 evaluation scenarios with diverse language instructions.

<img src="ann_stats-1.png" alt="Annotation statistics" width="500">

This figure illustrates the similarity distribution among the three user-provided instructions per scenario, based on GPT-4o's interpretability, semantic similarity, and sentence-structure similarity.
To assess GPT-4o's interpretability, we introduce a novel metric, the GPT score, which measures the coherence of GPT-4o's responses.
For each target, we provide GPT-4o with an image containing overlaid object IDs and ask it to identify the object specified by each of the three instructions.
The GPT score quantifies the fraction of correctly identified instructions, ranging from 0 (no correct identifications) to 1 (all three correct).
We evaluate semantic similarity using the embedding score, defined as the average SBERT (2) similarity across all pairs of user-provided instructions.
We assess structural similarity using the Rouge-L score, computed as the average Rouge-L (3) score across all instruction pairs.
Results indicate that instructions referring to the same target vary significantly in sentence structure (low Rouge-L score), reflecting differences in word choice and composition, while showing moderate variation in semantics (medium embedding score).
Interestingly, despite these variations, the consistently high GPT scores across all task difficulty levels suggest that GPT-4o is robust in identifying the correct target in the image, regardless of differences in instruction phrasing.

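The aggregation behind these scores is simple to reproduce. Below is a sketch: the Rouge-L here is a plain LCS-based F1 rather than the original `rouge` package, and for the embedding score the `score` argument would be an SBERT cosine similarity instead.

```python
from itertools import combinations


def gpt_score(predicted_ids, true_id):
    """Fraction of instructions for which the model picked the right object."""
    return sum(p == true_id for p in predicted_ids) / len(predicted_ids)


def rouge_l_f1(a, b):
    """Rouge-L F1 between two instructions, from the LCS of their tokens."""
    x, y = a.split(), b.split()
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, tx in enumerate(x):
        for j, ty in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if tx == ty else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(y), lcs / len(x)
    return 2 * precision * recall / (precision + recall)


def avg_pairwise(instructions, score):
    """Average a pairwise score over all instruction pairs
    (used for both the embedding score and the Rouge-L score)."""
    pairs = list(combinations(instructions, 2))
    return sum(score(a, b) for a, b in pairs) / len(pairs)
```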
(1) Gilles, Maximilian, et al. "MetaGraspNetV2: All-in-one dataset enabling fast and reliable robotic bin picking via object relationship reasoning and dexterous grasping." IEEE Transactions on Automation Science and Engineering 21.3 (2023): 2302-2320.
(2) Reimers, Nils, and Iryna Gurevych. "Sentence-BERT: Sentence embeddings using Siamese BERT-networks." arXiv preprint arXiv:1908.10084 (2019).
(3) Lin, Chin-Yew, and Franz Josef Och. "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics." Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04). 2004.

## Data Fields
```
- **__index_level_0__**: An integer representing the unique identifier for each example.
- **image**: The image, sourced from the MetaGraspNetV2 dataset.
- **sceneId**: An integer identifier for the scene, taken directly from MetaGraspNetV2. It corresponds to the specific scene in which the object appears.
- **queryObjId**: An integer identifier for the target object, from MetaGraspNetV2.
- **annotation**: A string containing the free-form language description of the target object within the scene.
- **groundTruthObjIds**: A string listing the object IDs that are considered the ground truth for the scene.
- **difficulty**: A string indicating the difficulty level of the grasp, as defined in the Dataset Description above. The difficulty levels are categorized based on the occlusion graph.
- **ambiguious**: A boolean indicating whether the object is ambiguous. An object is considered ambiguous if multiple instances of the same class are present in the scene.
- **split**: A string denoting the split (0, 1, or 2) corresponding to different annotations for the same image. This split indicates the partition of annotations, not the annotator.
```
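When consuming these fields, the string-typed `groundTruthObjIds` needs parsing before it can be compared against predictions. A hedged helper follows; the exact serialization of the field is an assumption here, so verify it against the actual parquet files.

```python
import re


def parse_ground_truth_ids(ground_truth_str):
    """Parse the `groundTruthObjIds` string into a list of integer object ids.

    The card documents the field only as "a string listing the object IDs";
    the exact serialization (e.g. "[3, 7]" vs "3,7") is an assumption, so we
    simply extract every integer token.
    """
    return [int(tok) for tok in re.findall(r"\d+", ground_truth_str)]


def is_correct_first_grasp(predicted_id, ground_truth_str):
    """Check a predicted object id against the ground-truth field."""
    return predicted_id in parse_ground_truth_ids(ground_truth_str)
```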

}
```
## Acknowledgement
<style>
.list_view{
  display:flex;
  align-items:center;
}
.list_view p{
  padding:10px;
}
</style>

<div class="list_view">
  <a href="https://tev-fbk.github.io/FreeGrasp/" target="_blank">
    <img src="VRT.png" alt="VRT Logo" style="max-width:200px">
  </a>
  <p>
    This project was supported by Fondazione VRT under the project Make Grasping Easy, by the PNRR ICSC National Research Centre for HPC, Big Data and Quantum Computing (CN00000013), and by FAIR - Future AI Research (PE00000013), funded by NextGenerationEU.
  </p>
</div>
 
151
 
152
  ## Partners
153
+ <style>
154
+ table {
155
+ width: 100%;
156
+ table-layout: fixed;
157
+ border-collapse: collapse;
158
+ }
159
+ th, td {
160
+ text-align: center;
161
+ padding: 10px;
162
+ vertical-align: middle;
163
+ }
164
+ </style>
165
+
166
+ <table>
167
+ <tbody>
168
+ <tr>
169
+ <td><a href="https://www.fbk.eu/en" target="_blank"><img src="fbk.png" alt="FBK" style="height: 100px;"></a></td>
170
+ <td><a href="https://www.unitn.it/en" target="_blank"><img src="unitn.png" alt="UNITN" style="height: 100px;"></a></td>
171
+ <td><a href="https://www.iit.it/" target="_blank"><img src="iit.png" alt="IIT" style="height: 100px;"></a></td>
172
+ </tr>
173
+ </tbody>
174
+ </table>