Improve dataset card: Add task category, paper link, introduction, features, and sample usage

#2 opened by nielsr (HF Staff)

Files changed (1): README.md (+165 −1)
README.md CHANGED
---
license: mit
task_categories:
- image-to-3d
tags:
- pose-estimation
- 6d-pose-estimation
- 3d-reconstruction
- object-detection
---

# HccePose (BF) Dataset

This repository contains the dataset and resources associated with the paper [HccePose(BF): Predicting Front & Back Surfaces to Construct Ultra-Dense 2D-3D Correspondences for Pose Estimation](https://huggingface.co/papers/2510.10177).

**Code:** [https://github.com/WangYuLin-SEU/HCCEPose](https://github.com/WangYuLin-SEU/HCCEPose)

<p align="center">
  <a href="https://arxiv.org/abs/2510.10177">
    <img src="https://img.shields.io/badge/arXiv-2510.10177-B31B1B.svg?logo=arxiv" alt="arXiv">
  </a>
  <a href="https://huggingface.co/datasets/SEU-WYL/HccePose">
    <img src="https://img.shields.io/badge/HuggingFace-HccePose-FFD21E.svg?logo=huggingface&logoColor=white" alt="HuggingFace">
  </a>
</p>

## 🧩 Introduction

HccePose is a state-of-the-art method for 6D object pose estimation from a single RGB image. It introduces a **Hierarchical Continuous Coordinate Encoding (HCCE)** scheme that encodes the three coordinate components of object surface points into hierarchical continuous codes. Through this hierarchical encoding, the neural network can effectively learn the correspondence between 2D image features and the 3D surface coordinates of the object.

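As a rough illustration of the idea only (the function below is invented for this sketch and is not the paper's actual formulation), a coordinate normalized to [0, 1] can be encoded coarse-to-fine, one continuous code per hierarchy level, here with a triangular wave whose period halves at each level:

```python
import numpy as np

def hierarchical_continuous_code(x, levels=4):
    """Conceptual hierarchical encoding of normalized coordinates x in [0, 1].

    Each level halves the spatial period, producing a coarse-to-fine stack of
    continuous codes in [0, 1]. This mirrors HCCE only in spirit; see the
    paper for the actual encoding.
    """
    codes = []
    for k in range(levels):
        phase = (x * (2 ** k)) % 1.0               # position within the level-k cell
        codes.append(1.0 - 2.0 * np.abs(phase - 0.5))  # continuous triangular code
    return np.stack(codes, axis=-1)

xs = np.linspace(0.0, 1.0, 5)
print(hierarchical_continuous_code(xs, levels=3))  # shape (5, 3)
```

A continuous code like this stays differentiable across cell boundaries, which is the property that discrete (e.g. binary) coordinate codes lack.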
At inference time, the network trained with HCCE predicts the 3D surface coordinates of the object from a single RGB image, and these predictions are fed to a **Perspective-n-Point (PnP)** solver to recover the 6D pose. Unlike traditional methods that only learn the visible front surface of objects, **HccePose(BF)** additionally learns the 3D coordinates of the back surface, thereby constructing denser 2D–3D correspondences and significantly improving pose estimation accuracy.

Notably, **HccePose(BF)** not only achieves high-precision 6D pose estimation but also delivers state-of-the-art 2D segmentation from a single RGB image: the continuous, hierarchical nature of HCCE helps the network learn accurate object masks, a substantial advantage over existing methods.

<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/fig2.jpg" width="100%">

## 🚀 Features

### 🔹 Object Preprocessing
- Object renaming and centering
- Rotation symmetry calibration (8 symmetry types) based on [**KASAL**](https://github.com/WangYuLin-SEU/KASAL)
- Export to [**BOP format**](https://github.com/thodan/bop_toolkit)

### 🔹 Training Data Preparation
- Synthetic data generation and rendering using [**BlenderProc**](https://github.com/DLR-RM/BlenderProc)

### 🔹 2D Detection
- Label generation and model training using [**Ultralytics**](https://github.com/ultralytics)

### 🔹 6D Pose Estimation
- Preparation of **front** and **back** surface 3D coordinate labels
- Distributed training (DDP) implementation of **HccePose**
- Testing and visualization via **Dataloader**
- **HccePose (YOLOv11)** inference and visualization on:
  - Single RGB images
  - RGB videos

## ✏️ Quick Start

This project provides a simple **HccePose-based** application example for the **Bin-Picking** task.
To keep it easy to reproduce, both the objects (3D printed in standard white PLA) and the camera (a Xiaomi smartphone) are easy to obtain.

You can:
- Print the sample object multiple times
- Randomly place the printed objects
- Capture photos freely with your phone
- Directly perform **2D detection**, **2D segmentation**, and **6D pose estimation** using the pretrained weights provided in this project

---

### 📦 Example Files
> Please keep the folder hierarchy unchanged.

| Type | Resource Link |
|------|----------------|
| 🎨 Object 3D Models | [models](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/models) |
| 📁 YOLOv11 Weights | [yolo11](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/yolo11) |
| 📂 HccePose Weights | [HccePose](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/demo-bin-picking/HccePose) |
| 🖼️ Test Images | [test_imgs](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/test_imgs) |
| 🎥 Test Videos | [test_videos](https://huggingface.co/datasets/SEU-WYL/HccePose/tree/main/test_videos) |

> ⚠️ Note:
> Files beginning with `train_` are only required for training.
> For this **Quick Start** section, only the above test files are needed.

---

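One way to fetch just these folders (an option, not the authors' prescribed workflow) is the `huggingface_hub` client:

```python
from huggingface_hub import snapshot_download

# Download only the Quick Start folders from this dataset repository.
local_dir = snapshot_download(
    repo_id="SEU-WYL/HccePose",
    repo_type="dataset",
    allow_patterns=["demo-bin-picking/*", "test_imgs/*"],
)
print(local_dir)  # local path containing the downloaded folders
```

Alternatively, `git clone` the whole repository; either way, keep the folder hierarchy unchanged as noted above.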
### 📸 Sample Usage

This example demonstrates how to perform 6D pose estimation using the provided pretrained weights and an example image from this repository.

First, ensure you have the required environment set up as described in the [GitHub repository's `Environment Setup` section](https://github.com/WangYuLin-SEU/HCCEPose#%EF%B8%8F-environment-setup).

Then, you can use the following Python script. Make sure to adjust `base_repo_path` so that it points to the local directory where you have cloned or downloaded the contents of this Hugging Face repository.

Example input image 👇
<div align="center">
  <img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/test_imgs/IMG_20251007_165718.jpg" width="40%">
</div>

Source image: [Example Link](https://github.com/WangYuLin-SEU/HCCEPose/blob/main/test_imgs/IMG_20251007_165718.jpg)

```python
import os

import cv2
import numpy as np

from HccePose.bop_loader import bop_dataset
from HccePose.tester import Tester

if __name__ == '__main__':
    # Adjust this path to where you have cloned/downloaded the Hugging Face dataset repository.
    # For example, if you cloned the repo to './HccePose', point base_repo_path there.
    base_repo_path = '.'  # Assuming the script is run from the root of the cloned HF repo

    dataset_path = os.path.join(base_repo_path, 'demo-bin-picking')
    bop_dataset_item = bop_dataset(dataset_path)

    CUDA_DEVICE = '0'  # Your CUDA device index, or 'cpu' if no GPU is available
    show_op = True     # Set to True to display visualizations

    Tester_item = Tester(bop_dataset_item, show_op=show_op, CUDA_DEVICE=CUDA_DEVICE)

    obj_id = 1
    image_name = 'IMG_20251007_165718'
    image_file_path = os.path.join(base_repo_path, 'test_imgs', f'{image_name}.jpg')

    if not os.path.exists(image_file_path):
        print(f"Error: Image file not found at {image_file_path}. Please ensure the HF dataset is downloaded correctly.")
    else:
        # OpenCV loads images as BGR; swap the channels to RGB.
        image = cv2.cvtColor(cv2.imread(image_file_path), cv2.COLOR_BGR2RGB)

        # Example camera intrinsics (from the GitHub README)
        cam_K = np.array([
            [2.83925618e+03, 0.00000000e+00, 2.02288638e+03],
            [0.00000000e+00, 2.84037288e+03, 1.53940473e+03],
            [0.00000000e+00, 0.00000000e+00, 1.00000000e+00],
        ])

        results_dict = Tester_item.perdict(cam_K, image, [obj_id],
                                           conf=0.85, confidence_threshold=0.85)

        # Save visualization results to an 'output_results' directory
        output_dir = './output_results'
        os.makedirs(output_dir, exist_ok=True)
        cv2.imwrite(os.path.join(output_dir, f'{image_name}_show_2d.jpg'), results_dict['show_2D_results'])
        cv2.imwrite(os.path.join(output_dir, f'{image_name}_show_6d_vis0.jpg'), results_dict['show_6D_vis0'])
        cv2.imwrite(os.path.join(output_dir, f'{image_name}_show_6d_vis1.jpg'), results_dict['show_6D_vis1'])
        cv2.imwrite(os.path.join(output_dir, f'{image_name}_show_6d_vis2.jpg'), results_dict['show_6D_vis2'])
        print(f"Results saved to {output_dir}")
```
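The `cam_K` used above is a standard pinhole intrinsics matrix whose values are specific to the authors' phone; for your own camera, substitute your own calibration. A quick sanity check of its structure (the projected point below is illustrative, not part of the project):

```python
import numpy as np

# Pinhole intrinsics: fx, fy are focal lengths in pixels; (cx, cy) is the
# principal point. Values here match the example intrinsics above.
fx, fy = 2839.25618, 2840.37288
cx, cy = 2022.88638, 1539.40473

cam_K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

# Project a camera-frame point 1 m straight ahead onto the image plane.
X = np.array([0.0, 0.0, 1.0])
u, v, w = cam_K @ X
print(u / w, v / w)  # a point on the optical axis lands at (cx, cy)
```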

### 🎯 Visualization Results

2D Detection Result (`_show_2d.jpg`):

<div align="center"> <img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_2d.jpg" width="40%"> </div>

---

Network Outputs:

- HCCE-based front and back surface coordinate encodings
- Object mask
- Decoded 3D coordinate visualizations

<div align="center"> <img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_6d_vis0.jpg" width="100%">
<img src="https://github.com/WangYuLin-SEU/HCCEPose/blob/main/show_vis/IMG_20251007_165718_show_6d_vis1.jpg" width="100%"> </div>

---