Improve dataset card: Add paper link, task categories, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +167 -1
README.md CHANGED
---
license: mit
task_categories:
- object-detection
- image-segmentation
- image-to-3d
tags:
- 6d-pose-estimation
- 2d-3d-correspondences
---

# HccePose (BF) Dataset

This repository contains the dataset for the paper [HccePose(BF): Predicting Front & Back Surfaces to Construct Ultra-Dense 2D-3D Correspondences for Pose Estimation](https://huggingface.co/papers/2510.10177).

Code: https://github.com/WangYuLin-SEU/HCCEPose

## Introduction

HccePose is a state-of-the-art method for 6D object pose estimation from a single RGB image. It introduces a **Hierarchical Continuous Coordinate Encoding (HCCE)** scheme, which encodes the three coordinate components of object surface points into hierarchical continuous codes. Through this hierarchical encoding, the network can effectively learn the correspondence between 2D image features and the 3D surface coordinates of the object.

At inference time, the network trained with HCCE predicts the 3D surface coordinates of the object from a single RGB image, which are then fed to a **Perspective-n-Point (PnP)** solver to recover the 6D pose. Unlike traditional methods that only learn the visible front surface of objects, **HccePose(BF)** additionally learns the 3D coordinates of the back surface, thereby constructing denser 2D-3D correspondences and significantly improving pose estimation accuracy.

Notably, **HccePose(BF)** not only achieves high-precision 6D pose estimation but also delivers state-of-the-art 2D segmentation from a single RGB image. The continuous and hierarchical nature of HCCE improves the network's ability to learn accurate object masks, offering substantial advantages over existing methods.

<div align="center">
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/fig2.jpg" width=100%>
</div>

## Features
### 🔹 Object Preprocessing
- Object renaming and centering
- Rotation symmetry calibration (8 symmetry types) based on [**KASAL**](https://github.com/WangYuLin-SEU/KASAL)
- Export to [**BOP format**](https://github.com/thodan/bop_toolkit)
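For orientation, the BOP format referenced above stores per-scene ground truth as JSON. Below is a minimal sketch of a `scene_gt.json`-style record, assuming the usual bop_toolkit schema (`obj_id`, row-major 3×3 rotation `cam_R_m2c`, translation `cam_t_m2c` in millimeters); the pose values are fabricated for illustration.

```python
import json

# Illustrative BOP-style ground truth: scene_gt.json maps an image id to a
# list of object annotations, each with a model-to-camera rotation and
# translation (assumed schema; values fabricated).
scene_gt = {
    "0": [
        {
            "obj_id": 1,
            "cam_R_m2c": [1, 0, 0, 0, 1, 0, 0, 0, 1],  # identity rotation
            "cam_t_m2c": [0.0, 0.0, 500.0],            # 0.5 m in front of camera
        }
    ]
}

# Round-trip through JSON, as the bop_loader would read it from disk.
loaded = json.loads(json.dumps(scene_gt, indent=2))
print(loaded["0"][0]["obj_id"])
```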

### 🔹 Training Data Preparation
- Synthetic data generation and rendering using [**BlenderProc**](https://github.com/DLR-RM/BlenderProc)

### 🔹 2D Detection
- Label generation and model training using [**Ultralytics**](https://github.com/ultralytics)

### 🔹 6D Pose Estimation
- Preparation of **front** and **back** surface 3D coordinate labels
- Distributed training (DDP) implementation of **HccePose**
- Testing and visualization via **Dataloader**
- **HccePose (YOLOv11)** inference and visualization on:
  - Single RGB images
  - RGB videos

## Environment Setup

```bash
apt-get update && apt-get install -y wget software-properties-common gnupg2 python3-pip

apt-get update && apt-get install -y libegl1-mesa-dev libgles2-mesa-dev libx11-dev libxext-dev libxrender-dev

python3 -m pip install --upgrade setuptools pip

pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118

apt-get update && apt-get install -y pkg-config libglvnd0 libgl1 libglx0 libegl1 libgles2 libglvnd-dev libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev cmake curl ninja-build

pip install ultralytics==8.3.70 fvcore==0.1.5.post20221221 pybind11==2.12.0 trimesh==4.2.2 ninja==1.11.1.1 kornia==0.7.2 open3d==0.19.0 transformations==2024.6.1 numpy==1.26.4 opencv-python==4.9.0.80 opencv-contrib-python==4.9.0.80

pip install scipy kiwisolver matplotlib imageio pypng Cython PyOpenGL triangle glumpy Pillow vispy imgaug mathutils pyrender pytz tqdm tensorboard kasal-6d
```

## Quick Start (Sample Usage)

This project provides a simple **HccePose-based** application example for the **Bin-Picking** task.
To make reproduction easy, both the objects (3D printed in standard white PLA) and the camera (a Xiaomi smartphone) are inexpensive and easy to obtain.

You can:
- Print the sample object multiple times
- Randomly place the printed objects
- Capture photos freely with your phone
- Directly perform **2D detection**, **2D segmentation**, and **6D pose estimation** using the pretrained weights provided in this project

---

### 📦 Example Files
> Please keep the folder hierarchy unchanged.

| Type | Resource Link |
|------|----------------|
| 🎨 Object 3D Models | [models](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/models) |
| 📁 YOLOv11 Weights | [yolo11](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/yolo11) |
| 📂 HccePose Weights | [HccePose](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/HccePose) |
| 🖼️ Test Images | [test_imgs](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/test_imgs) |
| 🎥 Test Videos | [test_videos](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/test_videos) |

> ⚠️ Note:
> Files beginning with `train_` are only required for training.
> For this **Quick Start** section, only the test files above are needed.

---

### ⏳ Model and Loader
During testing, import the following modules:
- `HccePose.tester` → Integrated testing module covering **2D detection**, **segmentation**, and **6D pose estimation**.
- `HccePose.bop_loader` → BOP-format dataset loader for loading object models and training data.

---

### 📸 Example Test
The following image shows the experimental setup:
several white 3D-printed objects are placed inside a bowl on a white table and photographed with a mobile phone.

Example input image 👇
<div align="center">
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/test_imgs/IMG_20251007_165718.jpg" width="40%">
</div>

Source image: [Example Link](https://github.com/WangYuLin-SEU/HCCEPose/blob/main/test_imgs/IMG_20251007_165718.jpg)

You can use the following script directly for **6D pose estimation** and visualization:

```python
import cv2
import numpy as np
from HccePose.tester import Tester
from HccePose.bop_loader import bop_dataset

if __name__ == '__main__':
    dataset_path = '/root/xxxxxx/demo-bin-picking'
    bop_dataset_item = bop_dataset(dataset_path)
    CUDA_DEVICE = '0'
    show_op = True  # set to False to disable visualization output
    Tester_item = Tester(bop_dataset_item, show_op=show_op, CUDA_DEVICE=CUDA_DEVICE)
    obj_id = 1
    for name in ['IMG_20251007_165718']:
        file_name = '/root/xxxxxx/test_imgs/%s.jpg' % name
        image = cv2.cvtColor(cv2.imread(file_name), cv2.COLOR_BGR2RGB)
        # Intrinsics of the phone camera used to capture the test images
        cam_K = np.array([
            [2.83925618e+03, 0.00000000e+00, 2.02288638e+03],
            [0.00000000e+00, 2.84037288e+03, 1.53940473e+03],
            [0.00000000e+00, 0.00000000e+00, 1.00000000e+00],
        ])
        results_dict = Tester_item.perdict(cam_K, image, [obj_id],
                                           conf=0.85, confidence_threshold=0.85)
        cv2.imwrite(file_name.replace('.jpg', '_show_2d.jpg'), results_dict['show_2D_results'])
        cv2.imwrite(file_name.replace('.jpg', '_show_6d_vis0.jpg'), results_dict['show_6D_vis0'])
        cv2.imwrite(file_name.replace('.jpg', '_show_6d_vis1.jpg'), results_dict['show_6D_vis1'])
        cv2.imwrite(file_name.replace('.jpg', '_show_6d_vis2.jpg'), results_dict['show_6D_vis2'])
```
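For context, `cam_K` in the script above is a standard 3×3 pinhole intrinsic matrix (focal lengths `fx`, `fy` in pixels and principal point `cx`, `cy`). If you capture photos with your own phone, substitute your camera's calibrated values (e.g. from `cv2.calibrateCamera` with a checkerboard). A minimal sketch of how such a matrix is built and used; the values are illustrative, not a calibration of any real device:

```python
import numpy as np

# Hypothetical pinhole intrinsics (pixels); calibrate your own camera
# to obtain real values for these four numbers.
fx, fy, cx, cy = 2839.25618, 2840.37288, 2022.88638, 1539.40473
cam_K = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Project a 3D point in the camera frame to pixel coordinates.
X = np.array([0.1, -0.05, 0.5])   # meters, in front of the camera
u, v, w = cam_K @ X
print(u / w, v / w)               # pixel location after perspective divide
```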

### 🎯 Visualization Results

2D Detection Result (`_show_2d.jpg`):

<div align="center"> <img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_2d.jpg" width="40%"> </div>

---

Network Outputs:

- HCCE-based front and back surface coordinate encodings
- Object mask
- Decoded 3D coordinate visualizations

<div align="center"> <img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_6d_vis0.jpg" width="100%">
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_6d_vis1.jpg" width="100%"> </div>

---

## 🏆 BOP LeaderBoards
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/bop-6D-loc.png" width=100%>
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/bop-2D-seg.png" width=100%>