Update README.md
</div>
</div>

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64fa06b03cbcb6ab54bb0d1d/HJ9RPyablf24_PBhQAnBf.jpeg)

## Introduction

Accurate 6D pose estimation of complex objects in 3D environments is essential for effective robotic manipulation. Yet, existing benchmarks fall short in evaluating 6D pose estimation methods under realistic industrial conditions, as most datasets focus on household objects in domestic settings, while the few available industrial datasets are limited to artificial setups with objects placed on tables. To bridge this gap, we introduce CHIP, the first dataset designed for 6D pose estimation of chairs manipulated by a robotic arm in a real-world industrial environment. CHIP includes seven distinct chairs captured using three different RGBD sensing technologies and presents unique challenges, such as distractor objects with fine-grained differences and severe occlusions caused by the robotic arm and human operators. CHIP comprises 77,811 RGBD images annotated with ground-truth 6D poses automatically derived from the robot's kinematics, averaging 11,115 annotations per chair. We benchmark CHIP using three zero-shot 6D pose estimation methods, assessing performance across different sensor types, localization priors, and occlusion levels. Results show substantial room for improvement, highlighting the unique challenges posed by the dataset.

### Object Classes

The dataset contains seven distinct chair models, each represented by a unique object class: three solid-wood and four frame-only designs, originally including cushions. The chairs exhibit a variety of designs and structures, providing a diverse set of challenges for 6D pose estimation algorithms.

#### Frame-only designs

- 000001: Smile si0325 [Andreu World link](https://andreuworld.com/en/products/smile-si0325)
- 000003: Carlotta si0991 [Andreu World link](https://andreuworld.com/en/products/carlotta-si0991)
- 000006: Carola so0903 [Andreu World link](https://andreuworld.com/en/products/carola-so0903)
- 000007: Rizo so2043 [Andreu World link](https://andreuworld.com/en/products/rizo-so2043)

#### Solid-wood designs

- 000002: Happy si0374 [Andreu World link](https://andreuworld.com/en/products/happy-si0374)
- 000004: Duos si2750 [Andreu World link](https://andreuworld.com/en/products/duos-si2750)
- 000005: Rdl si7291 [Andreu World link](https://andreuworld.com/en/products/rdl-si7291)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64fa06b03cbcb6ab54bb0d1d/6nw6qOcPE-yWVyGlhqvvs.jpeg)
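For programmatic use, the class lists above can be collected into a lookup table keyed by the zero-padded object ID. This is an illustrative sketch: the `CHIP_CLASSES` name and the `design_type` helper are not part of any dataset tooling, only the IDs, model names, and design types come from the lists above.

```python
# Lookup table for the seven CHIP object classes listed above,
# mapping each zero-padded class ID to (model name, design type).
CHIP_CLASSES = {
    "000001": ("Smile si0325", "frame-only"),
    "000002": ("Happy si0374", "solid-wood"),
    "000003": ("Carlotta si0991", "frame-only"),
    "000004": ("Duos si2750", "solid-wood"),
    "000005": ("Rdl si7291", "solid-wood"),
    "000006": ("Carola so0903", "frame-only"),
    "000007": ("Rizo so2043", "frame-only"),
}

def design_type(obj_id: str) -> str:
    """Return 'frame-only' or 'solid-wood' for a CHIP object class ID."""
    return CHIP_CLASSES[obj_id][1]
```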

### Data Fields

- **scene_id:** Unique identifier for each scene in the dataset (BOP format).
- **image_id:** Unique identifier for each image within a scene and camera type (BOP format).
- **camera_type:** Type of camera used to capture the image (e.g., 'zed', 'rs_l515', 'rs_d435').
- **image:** RGB image captured by the specified camera.
- **depth:** Depth image corresponding to the RGB image, captured by the specified camera.
- **width:** Width of the image in pixels.
- **height:** Height of the image in pixels.
- **split:** Dataset split to which the image belongs (e.g., 'test_no_occlusions', 'test_moderate_occlusions').
- **source_image_id:** Original image identifier from the CHIP dataset.
- **labels:** JSON string containing object annotations, including 6D poses and visibility information.
- **camera_params:** JSON string containing intrinsic and extrinsic camera parameters for the specified camera.
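Since `labels` and `camera_params` are stored as JSON strings, they must be decoded before use. A minimal sketch with a hypothetical record: the field names follow the list above, but the inner structure of the two JSON fields (BOP-style `cam_R_m2c`/`cam_t_m2c` keys, `visib_fract`, intrinsics keys) is an assumption for illustration, not the documented schema.

```python
import json

# Hypothetical CHIP record. Top-level field names come from "Data Fields";
# the contents of `labels` and `camera_params` are assumed for illustration.
record = {
    "scene_id": 1,
    "image_id": 42,
    "camera_type": "zed",
    "width": 1280,
    "height": 720,
    "split": "test_no_occlusions",
    "labels": json.dumps([
        {
            "obj_id": "000001",
            "cam_R_m2c": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  # rotation (assumed key)
            "cam_t_m2c": [0.0, 0.0, 1.5],                    # translation (assumed key)
            "visib_fract": 0.87,                             # visibility (assumed key)
        }
    ]),
    "camera_params": json.dumps(
        {"fx": 700.0, "fy": 700.0, "cx": 640.0, "cy": 360.0}  # assumed intrinsics keys
    ),
}

# The JSON-string fields must be decoded before the annotations can be used.
annotations = json.loads(record["labels"])
intrinsics = json.loads(record["camera_params"])

for ann in annotations:
    print(ann["obj_id"], ann["visib_fract"])
```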

## Citation

If you find CHIP useful for your work, please cite:

```
@inproceedings{nardon2025chip,
  booktitle={British Machine Vision Conference (BMVC)},
  year={2025}}
```

## Acknowledgement

<style>
.list_view{
  display:flex;
  align-items:center;
}
.list_view p{
  padding:10px;
}
</style>

<div class="list_view">
  <a href="https://aiprism.eu/" target="_blank">
    <img src="resources/Ai-Prism_Logo_Square.png" alt="Ai-Prism logo" style="max-width:200px">
  </a>
  <p>
    This work was supported by the European Union's Horizon Europe research and innovation programme under grant agreement No. 101058589 (AI-PRISM).
  </p>
</div>

### Partners

<style>
.partners_view{
  display:flex;
  align-items:center;
  gap:20px;
}
.partners_view p{
  padding:10px;
}
</style>

<div class="partners_view">
  <a href="https://www.fbk.eu/" target="_blank">
    <img src="resources/logo_fbk.png" alt="FBK logo" style="max-width:150px">
  </a>
  <a href="https://www.andreuworld.com/en/" target="_blank">
    <img src="resources/Logo_Andreu_World.png" alt="Andreu World logo" style="max-width:150px">
  </a>
  <a href="https://www.ikerlan.es/en" target="_blank">
    <img src="resources/Ikerlan_BRTA_V.png" alt="Ikerlan logo" style="max-width:150px">
  </a>
</div>

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]