---
license: cc-by-nc-sa-4.0
tags:
- 6d-pose-estimation
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: scene_id
    dtype: string
  - name: image_id
    dtype: string
  - name: camera_type
    dtype: string
  - name: image
    dtype: image
  - name: depth
    dtype: image
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: split
    dtype: string
  - name: labels
    dtype: string
  - name: camera_params
    dtype: string
  - name: source_image_id
    dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: data/*
pretty_name: CHIP
---
# CHIP: A multi-sensor dataset for 6D pose estimation of chairs in industrial settings
<div style="display: flex; justify-content: center; gap: 1rem; flex-wrap: wrap; margin: 2rem 0;">

  <a href="https://tev-fbk.github.io/CHIP/"
     target="_blank"
     style="
       display: inline-block;
       background-image: linear-gradient(to right, #3b82f6, #06b6d4);
       color: white;
       font-weight: bold;
       font-size: 1rem;
       padding: 0.85rem 1.75rem;
       border-radius: 0.75rem;
       text-decoration: none;
       transition: opacity 0.3s;
     "
     onmouseover="this.style.opacity='0.85'"
     onmouseout="this.style.opacity='1'">
    🏠 Homepage
  </a>

  <a href="https://arxiv.org/abs/2506.09699"
     target="_blank"
     style="
       display: inline-block;
       background-image: linear-gradient(to right, #34d399, #10b981);
       color: white;
       font-weight: bold;
       font-size: 1rem;
       padding: 0.85rem 1.75rem;
       border-radius: 0.75rem;
       text-decoration: none;
       transition: opacity 0.3s;
     "
     onmouseover="this.style.opacity='0.85'"
     onmouseout="this.style.opacity='1'">
    📄 Paper
  </a>

</div>

![CHIP Dataset Teaser](resources/teaser.png)

## Introduction

Accurate 6D pose estimation of complex objects in 3D environments is essential for effective robotic manipulation. Yet, existing benchmarks fall short in evaluating 6D pose estimation methods under realistic industrial conditions, as most datasets focus on household objects in domestic settings, while the few available industrial datasets are limited to artificial setups with objects placed on tables. To bridge this gap, we introduce CHIP, the first dataset designed for 6D pose estimation of chairs manipulated by a robotic arm in a real-world industrial environment. CHIP includes seven distinct chairs captured using three different RGBD sensing technologies and presents unique challenges, such as distractor objects with fine-grained differences and severe occlusions caused by the robotic arm and human operators. CHIP comprises 77,811 RGBD images annotated with ground-truth 6D poses automatically derived from the robot's kinematics, averaging 11,115 annotations per chair. We benchmark CHIP using three zero-shot 6D pose estimation methods, assessing performance across different sensor types, localization priors, and occlusion levels. Results show substantial room for improvement, highlighting the unique challenges posed by the dataset.

## Dataset Summary
- **Number of images:** 77,811 RGBD images
- **Number of object classes:** 7 distinct chair models
- **Sensors used:** Intel RealSense D435, Intel RealSense L515, Stereolabs ZED
- **Annotations:** Ground-truth 6D poses derived from robot kinematics (~11,115 annotations per chair)
- **Occlusion levels:** No occlusions, moderate occlusions

### Object Classes
CHIP includes seven high-quality chair models from Andreu World, covering a variety of structures and materials.

#### Frame-only designs:
- **000001**: Smile si0325 — [Andreu World link](https://andreuworld.com/en/products/smile-si0325)
- **000003**: Carlotta si0991 — [Andreu World link](https://andreuworld.com/en/products/carlotta-si0991)
- **000006**: Carola so0903 — [Andreu World link](https://andreuworld.com/en/products/carola-so0903)
- **000007**: Rizo so2043 — [Andreu World link](https://andreuworld.com/en/products/rizo-so2043)

#### Solid-wood designs:
- **000002**: Happy si0374 — [Andreu World link](https://andreuworld.com/en/products/happy-si0374)
- **000004**: Duos si2750 — [Andreu World link](https://andreuworld.com/en/products/duos-si2750)
- **000005**: Rdl si7291 — [Andreu World link](https://andreuworld.com/en/products/rdl-si7291)

![Example Chair Models](resources/chairs.jpg)

### Data Fields
```
- scene_id: Unique identifier for each scene in the dataset (BOP format).
- image_id: Unique identifier for each image within a scene and camera type (BOP format).
- camera_type: Type of camera used to capture the image (e.g., 'zed', 'rs_l515', 'rs_d435').
- image: RGB image captured by the specified camera.
- depth: Depth image corresponding to the RGB image, captured by the specified camera.
- width: Width of the image in pixels.
- height: Height of the image in pixels.
- split: Dataset split to which the image belongs (e.g., 'test_no_occlusions', 'test_moderate_occlusions').
- source_image_id: Original image identifier from the CHIP dataset.
- labels: JSON string containing object annotations, including 6D poses and visibility information.
- camera_params: JSON string containing intrinsic and extrinsic camera parameters for the specified camera.
```
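Since `labels` and `camera_params` are stored as JSON strings, they need to be decoded before use. The snippet below is a minimal sketch of that step on a hypothetical sample; the keys inside the JSON payloads are assumed here to follow BOP conventions (`cam_R_m2c`, `cam_t_m2c`, `cam_K`) and may differ from the actual files, so check a real sample first.

```python
import json

# Hypothetical sample mirroring the schema above. The JSON keys inside
# `labels` and `camera_params` are assumed BOP-style placeholders, not
# values taken from the actual CHIP files.
sample = {
    "scene_id": "000001",
    "image_id": "000042",
    "camera_type": "zed",
    "labels": json.dumps(
        [{"obj_id": 1,
          "cam_R_m2c": [1, 0, 0, 0, 1, 0, 0, 0, 1],   # row-major 3x3 rotation
          "cam_t_m2c": [0.0, 0.0, 1200.0]}]           # translation
    ),
    "camera_params": json.dumps(
        {"cam_K": [600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0]}
    ),
}

annotations = json.loads(sample["labels"])        # list of per-object poses
intrinsics = json.loads(sample["camera_params"])  # camera parameters
print(annotations[0]["obj_id"], intrinsics["cam_K"][0])
```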

## Example of Usage

To get started quickly, you can run the example script included in this repository:

👉 **[`example_usage.py`](./example_usage.py)**

This script shows how to load CHIP samples from the dataset, automatically download the required models, and reconstruct a 3D point cloud from the RGB-D image. It then overlays the lifted point cloud with the corresponding object mesh using the ground-truth 6D pose, allowing you to visually inspect the alignment in 3D.

This minimal example demonstrates how to:

- Load RGB-D data and camera parameters  
- Lift depth into a point cloud  
- Load the object mesh for a given chair  
- Apply the ground-truth 6D pose  
- Visualize everything together in 3D  
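The depth-lifting and pose-application steps above can be sketched with standard pinhole back-projection. This is a generic illustration, not the implementation in `example_usage.py`; the depth unit (millimetres) and the toy intrinsic matrix are assumptions for the example.

```python
import numpy as np

def lift_depth(depth, K, depth_scale=0.001):
    """Back-project an (H, W) depth map into an (N, 3) point cloud in metres.

    K is the 3x3 pinhole intrinsic matrix; depth_scale converts raw depth
    units (assumed millimetres here) to metres. Zero-depth pixels are dropped.
    """
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

def apply_pose(points, R, t):
    """Map (N, 3) model-frame points into the camera frame via p' = R p + t."""
    return points @ np.asarray(R).T + np.asarray(t)

# Synthetic 2x2 depth map at 1 m with a toy intrinsic matrix.
K = np.array([[500.0, 0.0, 1.0], [0.0, 500.0, 1.0], [0.0, 0.0, 1.0]])
cloud = lift_depth(np.full((2, 2), 1000.0), K)
moved = apply_pose(cloud, np.eye(3), [0.0, 0.0, 0.5])
```

Overlaying `apply_pose(mesh_vertices, R, t)` on the lifted cloud is what lets you visually check the ground-truth alignment.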

Run the script with:

```bash
python example_usage.py
```
and a 3D viewer will open, showing the reconstructed scene and the posed chair model.

![Example](resources/example_usage.png)

### Uses

The CHIP dataset can be used for a wide range of research tasks, including:

- Benchmarking 6D pose estimation in realistic industrial environments
- Evaluating robustness to challenging conditions, such as occlusions, clutter, and visually similar distractor objects
- Studying cross-sensor generalization across LiDAR-based, passive-stereo, and active-stereo RGB-D sensors
- 6D pose tracking during robotic manipulation, including motion sequences with dynamic occlusions

## Citation
If you find CHIP useful for your work, please cite:
```
@inproceedings{nardon2025chip,
  title={CHIP: A multi-sensor dataset for 6D pose estimation of chairs in industrial settings},
  author={Nardon, Mattia and Mujika Agirre, Mikel and González Tomé, Ander and Sedano Algarabel, Daniel and Rueda Collell, Josep and Caro, Ana Paola and Caraffa, Andrea and Poiesi, Fabio and Chippendale, Paul Ian and Boscaini, Davide},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2025}
}
```

## Acknowledgement
<style>
  .list_view{
      display:flex;
      align-items:center;
  }
  .list_view p{
      padding:10px;
  }
</style>

<div class="list_view">
  <a href="https://aiprism.eu/" target="_blank">
    <img src="resources/logos/Ai-Prism_Logo_Square.png" alt="Ai-Prism logo" style="max-width:200px">
  </a>
  <p>
    This work was supported by the European Union's Horizon Europe research and innovation programme under grant agreement No. 101058589 (AI-PRISM).
  </p>
</div>

### Partners
<div style="background-color: #f8fafc; border: 1px solid #e2e8f0; border-radius: 1rem; padding: 2rem; max-width: 672px; margin: 2rem auto; text-align: center; font-family: sans-serif; color: #334155;">
  <div style="display: flex; justify-content: center; align-items: center; gap: 30px; flex-wrap: wrap; margin-top: 1.5rem;">
    <a href="https://www.fbk.eu/" target="_blank" style="opacity: 0.9; transition: opacity 0.3s;" onmouseover="this.style.opacity='1'" onmouseout="this.style.opacity='0.9'">
      <img src="resources/logos/logo_fbk.png" alt="FBK logo" style="max-width:120px;">
    </a>
    <a href="https://www.andreuworld.com/en/" target="_blank" style="opacity: 0.9; transition: opacity 0.3s;" onmouseover="this.style.opacity='1'" onmouseout="this.style.opacity='0.9'">
      <img src="resources/logos/Logo_Andreu_World.png" alt="Andreu World logo" style="max-width:120px;">
    </a>
    <a href="https://www.ikerlan.es/en" target="_blank" style="opacity: 0.9; transition: opacity 0.3s;" onmouseover="this.style.opacity='1'" onmouseout="this.style.opacity='0.9'">
      <img src="resources/logos/Ikerlan_BRTA_V.png" alt="Ikerlan logo" style="max-width:120px;">
    </a>
  </div>
</div>

### Contact
For any questions regarding the dataset, please contact Mattia Nardon (mattia.nardon@fbk.eu).

Homepage link: https://tev-fbk.github.io/CHIP/

Paper link: https://arxiv.org/abs/2506.09699