Title: GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation

URL Source: https://arxiv.org/html/2206.07162

Markdown Content:

Yumeng Li¹  Ning Gao¹,²  Hanna Ziesche¹  Gerhard Neumann²

¹Bosch Center for Artificial Intelligence  ²Autonomous Learning Robots, KIT

{yumeng.li, ning.gao, hanna.ziesche}@de.bosch.com  gerhard.neumann@kit.edu
###### Abstract

We present a novel meta-learning approach for 6D pose estimation on unknown objects. In contrast to “instance-level” and “category-level” pose estimation methods, our algorithm learns object representations in a category-agnostic way, which endows it with strong generalization capabilities across object categories. Specifically, we employ a neural-process-based meta-learning approach to train an encoder that captures the texture and geometry of an object in a latent representation, based on very few RGB-D images and ground-truth keypoints. The latent representation is then used by a simultaneously meta-trained decoder to predict the 6D pose of the object in new images. Furthermore, we propose a novel geometry-aware decoder for the keypoint prediction using a Graph Neural Network (GNN), which explicitly takes geometric constraints specific to each object into consideration. To evaluate our algorithm, we conduct extensive experiments on the LineMOD dataset and on our new fully-annotated synthetic datasets generated from Multiple Categories in Multiple Scenes (MCMS). Experimental results demonstrate that our model performs well on unseen objects with very different shapes and appearances. Remarkably, our model also shows robust performance on occluded scenes despite being trained exclusively on occlusion-free data. To our knowledge, this is the first work exploring cross-category level 6D pose estimation.

1 Introduction
--------------

Estimating the 6D pose of an object is of practical interest for many real-world applications such as robotic grasping, autonomous driving and augmented reality (AR). Prior work has investigated instance-level 6D pose estimation[[63](https://arxiv.org/html/2206.07162#bib.bib63), [45](https://arxiv.org/html/2206.07162#bib.bib45), [27](https://arxiv.org/html/2206.07162#bib.bib27), [26](https://arxiv.org/html/2206.07162#bib.bib26)], where the objects are predefined. Although these methods achieve satisfactory performance, they are prone to overfitting to specific objects and thus suffer from poor generalization. Given the high variety of colors and shapes among real-world objects, retraining the model every time new objects arrive is impractical, time-consuming and data-inefficient.
Figure 1: Illustration of the difference between traditional instance-level 6D pose estimation methods and our approach. Unlike other methods, our proposed approach generalizes to novel objects given a few context observations. The projected ground-truth keypoints are visualized as blue points in the context images. The predicted segmentation and keypoints are visualized in the target images.
Recently, this issue has attracted increasing attention in the community and several approaches[[64](https://arxiv.org/html/2206.07162#bib.bib64), [4](https://arxiv.org/html/2206.07162#bib.bib4), [62](https://arxiv.org/html/2206.07162#bib.bib62), [7](https://arxiv.org/html/2206.07162#bib.bib7), [8](https://arxiv.org/html/2206.07162#bib.bib8), [5](https://arxiv.org/html/2206.07162#bib.bib5)] have been proposed for category-level 6D pose estimation. NOCS[[64](https://arxiv.org/html/2206.07162#bib.bib64)] and CASS[[4](https://arxiv.org/html/2206.07162#bib.bib4)], for example, map different instances of each category into a unified representational space based on RGB or RGB-D features. However, the assumption of a unified space potentially leads to a decrease in performance in the case of strong object variations. FS-Net[[7](https://arxiv.org/html/2206.07162#bib.bib7)] proposes an orientation-aware autoencoder with 3D graph convolutions for latent feature extraction, where translation and scale are estimated using a tiny PointNet[[48](https://arxiv.org/html/2206.07162#bib.bib48)]. Furthermore, Chen _et al_.[[8](https://arxiv.org/html/2206.07162#bib.bib8)] provide an alternative based on “analysis-by-synthesis” to train a pose-aware image generator, implicitly representing the appearance, shape and pose of entire object categories. However, these methods require a pretrained object detector for each specific category, which limits their generalization ability across categories.
In this paper, we present a new meta-learning based approach to increase the generalization capability of 6D pose estimation. To our knowledge, this is the first work that allows generalization across object categories. The main idea of our method lies in meta-learning object-centric representations in a category-agnostic way. Meta-learning aims to adapt rapidly to new tasks based only on a few examples. More specifically, we employ Conditional Neural Processes (CNPs)[[21](https://arxiv.org/html/2206.07162#bib.bib21)] to learn a latent representation of objects, capturing the generic appearance and geometry. Inference on new objects then merely needs a few labeled examples as input to extract a respective representation. In particular, fine-tuning on new objects is not necessary. A comparison between traditional instance-level approaches and ours is illustrated in [Fig.1](https://arxiv.org/html/2206.07162#S1.F1 "Figure 1 ‣ 1 Introduction ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation").
For feature extraction, we use FFB6D[[26](https://arxiv.org/html/2206.07162#bib.bib26)], which learns representative features through a fusion network based on RGB-D images. However, instead of directly using the extracted features for downstream applications, i.e., segmentation and keypoint-offset prediction, we add a CNP on top of the fusion network to further meta-learn a latent representation for each object. The CNP takes in the representative features from a set of context images of an object, together with their ground-truth labels, and yields a latent representation. The subsequent predictions for new target images are conditioned on this latent representation.
To further leverage the object geometry and improve the keypoint prediction, we propose a novel GNN-based decoder that takes predefined canonical keypoints in the object’s reference frame as an additional input and encodes local spatial constraints via message passing among the keypoints. Note that this additional input to the GNN does not require any annotations beyond those already available in the existing datasets used by prior keypoint-based methods. The proposed pipeline is illustrated in [Fig.2](https://arxiv.org/html/2206.07162#S1.F2 "Figure 2 ‣ 1 Introduction ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation").
Due to the lack of available data for cross-category level 6D pose estimation, we generate our own synthetic dataset with Multiple Categories in Multiple Scenes (MCMS), using objects from ShapeNet[[3](https://arxiv.org/html/2206.07162#bib.bib3)] and extending the open-source rendering pipeline[[10](https://arxiv.org/html/2206.07162#bib.bib10)] with online occlusion and truncation checks. This gives us the flexibility to generate datasets with limited and with considerable occlusion, respectively.
Figure 2: Schematic pipeline of our approach.
In summary, the main contributions of this work are as follows:

*   We introduce a novel meta-learning framework for 6D pose estimation with strong generalization ability on unseen objects within and across object categories.

*   We propose a GNN-based keypoint prediction module that leverages geometric information from canonical keypoint coordinates and captures local spatial constraints among keypoints via message passing.

*   We provide fully-annotated synthetic datasets with abundant diversity, which facilitate future research on intra- and cross-category level 6D pose estimation.

2 Related Work
--------------

6D Pose Estimation. For instance-level 6D pose estimation, methods can be categorized into three classes: correspondence-based, template-based and voting-based methods[[11](https://arxiv.org/html/2206.07162#bib.bib11)]. Correspondence-based methods aim to find 2D-3D correspondences[[70](https://arxiv.org/html/2206.07162#bib.bib70), [54](https://arxiv.org/html/2206.07162#bib.bib54), [47](https://arxiv.org/html/2206.07162#bib.bib47)] or 3D-3D correspondences[[17](https://arxiv.org/html/2206.07162#bib.bib17)]. Template-based methods, on the other hand, match the inputs to templates, which can be either explicit pose-aware images[[28](https://arxiv.org/html/2206.07162#bib.bib28), [29](https://arxiv.org/html/2206.07162#bib.bib29)] or templates learned implicitly by neural networks[[56](https://arxiv.org/html/2206.07162#bib.bib56)]. Voting-based approaches[[45](https://arxiv.org/html/2206.07162#bib.bib45), [27](https://arxiv.org/html/2206.07162#bib.bib27), [26](https://arxiv.org/html/2206.07162#bib.bib26)] generate voting candidates from feature representations, after which the RANSAC algorithm[[18](https://arxiv.org/html/2206.07162#bib.bib18)] or a clustering mechanism such as MeanShift[[9](https://arxiv.org/html/2206.07162#bib.bib9)] is applied to select the best candidates. Our feature extractor, FFB6D[[26](https://arxiv.org/html/2206.07162#bib.bib26)], falls into the last category. FFB6D proposes a bidirectional fusion module to combine appearance and geometry information for feature learning. The extracted features are then used to predict per-point semantic labels and keypoint offsets, after which MeanShift is used to vote for 3D keypoints. Finally, the keypoints are used to predict the final 6D pose by least-squares fitting[[1](https://arxiv.org/html/2206.07162#bib.bib1)].
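To make the voting step concrete, the following is a minimal, self-contained sketch of Gaussian-kernel mean shift locating one 3D keypoint from noisy per-point votes. It is an illustration of the general technique, not the paper's implementation; the bandwidth value and the synthetic vote distribution are assumptions for the example.

```python
import numpy as np

def mean_shift_mode(votes, bandwidth=0.05, iters=30):
    """Toy Gaussian-kernel mean shift: move an estimate toward the densest
    region of the 3D keypoint votes, down-weighting outlier votes."""
    mode = votes.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(votes - mode, axis=1)
        w = np.exp(-((d / bandwidth) ** 2))   # Gaussian kernel weights
        mode = (w[:, None] * votes).sum(axis=0) / w.sum()
    return mode

rng = np.random.default_rng(1)
true_kp = np.array([0.2, -0.1, 0.5])
votes = true_kp + 0.01 * rng.standard_normal((200, 3))
votes[:20] += 0.5                             # simulated outlier votes
est = mean_shift_mode(votes)
print(np.round(est - true_kp, 3))
```

Because distant votes receive exponentially small weights, the estimate converges to the dense cluster of inlier votes and is largely unaffected by the simulated outliers.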
Recently, category-level 6D object pose estimation has gained increasing attention[[64](https://arxiv.org/html/2206.07162#bib.bib64), [4](https://arxiv.org/html/2206.07162#bib.bib4), [62](https://arxiv.org/html/2206.07162#bib.bib62), [7](https://arxiv.org/html/2206.07162#bib.bib7), [8](https://arxiv.org/html/2206.07162#bib.bib8)]. Wang _et al_.[[64](https://arxiv.org/html/2206.07162#bib.bib64)] share a canonical representation for all possible object instances within a category using the Normalized Object Coordinate Space (NOCS). However, inferring the object pose by predicting only the NOCS representation is not easy given large intra-category variations[[14](https://arxiv.org/html/2206.07162#bib.bib14)]. To tackle this problem, [[58](https://arxiv.org/html/2206.07162#bib.bib58)] accounts for intra-category shape variations by explicitly modeling the deformation from a shape prior to the object model, while CASS[[4](https://arxiv.org/html/2206.07162#bib.bib4)] generates 3D point clouds in the canonical space using a variational autoencoder (VAE). FS-Net[[7](https://arxiv.org/html/2206.07162#bib.bib7)] proposes a shape-based model using 3D graph convolutions and a decoupled rotation mechanism to further reduce the sensitivity of RGB features to color variations. However, these methods model the feature space explicitly on a category level and therefore have limited generalization ability across categories. By contrast, our method learns 6D pose estimation in a category-agnostic manner and can handle new objects from unseen categories.
Meta-Learning. Meta-learning, also known as learning to learn, aims to acquire meta-knowledge that helps a model quickly adapt to new tasks with very few samples. In general, meta-learning can be categorized into metric-based[[60](https://arxiv.org/html/2206.07162#bib.bib60), [53](https://arxiv.org/html/2206.07162#bib.bib53), [57](https://arxiv.org/html/2206.07162#bib.bib57)], optimization-based[[15](https://arxiv.org/html/2206.07162#bib.bib15), [43](https://arxiv.org/html/2206.07162#bib.bib43), [16](https://arxiv.org/html/2206.07162#bib.bib16)] and model-based[[50](https://arxiv.org/html/2206.07162#bib.bib50), [21](https://arxiv.org/html/2206.07162#bib.bib21), [22](https://arxiv.org/html/2206.07162#bib.bib22), [32](https://arxiv.org/html/2206.07162#bib.bib32)] methods. Many meta-learning approaches have been applied to computer vision, e.g., few-shot image classification[[24](https://arxiv.org/html/2206.07162#bib.bib24), [71](https://arxiv.org/html/2206.07162#bib.bib71), [59](https://arxiv.org/html/2206.07162#bib.bib59), [39](https://arxiv.org/html/2206.07162#bib.bib39)], vision regression[[20](https://arxiv.org/html/2206.07162#bib.bib20)], object detection[[46](https://arxiv.org/html/2206.07162#bib.bib46), [12](https://arxiv.org/html/2206.07162#bib.bib12), [13](https://arxiv.org/html/2206.07162#bib.bib13), [72](https://arxiv.org/html/2206.07162#bib.bib72), [6](https://arxiv.org/html/2206.07162#bib.bib6)], robotic grasping[[19](https://arxiv.org/html/2206.07162#bib.bib19)], semantic segmentation[[52](https://arxiv.org/html/2206.07162#bib.bib52), [36](https://arxiv.org/html/2206.07162#bib.bib36), [44](https://arxiv.org/html/2206.07162#bib.bib44), [73](https://arxiv.org/html/2206.07162#bib.bib73)] and 3D reconstruction[[61](https://arxiv.org/html/2206.07162#bib.bib61), [42](https://arxiv.org/html/2206.07162#bib.bib42)]. Our work is based on Neural Processes (NPs)[[22](https://arxiv.org/html/2206.07162#bib.bib22), [32](https://arxiv.org/html/2206.07162#bib.bib32), [41](https://arxiv.org/html/2206.07162#bib.bib41), [25](https://arxiv.org/html/2206.07162#bib.bib25), [34](https://arxiv.org/html/2206.07162#bib.bib34)], which fall into the category of model-based meta-learning approaches. NPs have shown promising performance on simple tasks like function regression and image completion. However, their application to 6D pose estimation has not yet been properly explored. We introduce CNP[[21](https://arxiv.org/html/2206.07162#bib.bib21)] to this problem in order to tackle the poor generalization ability of existing methods at both the intra- and the cross-category level.
Graph Neural Networks. Graph neural networks (GNNs) have been widely applied to vision applications, such as image classification[[35](https://arxiv.org/html/2206.07162#bib.bib35), [31](https://arxiv.org/html/2206.07162#bib.bib31), [40](https://arxiv.org/html/2206.07162#bib.bib40)], semantic segmentation[[49](https://arxiv.org/html/2206.07162#bib.bib49), [33](https://arxiv.org/html/2206.07162#bib.bib33), [66](https://arxiv.org/html/2206.07162#bib.bib66), [37](https://arxiv.org/html/2206.07162#bib.bib37)] and object detection[[30](https://arxiv.org/html/2206.07162#bib.bib30), [51](https://arxiv.org/html/2206.07162#bib.bib51), [67](https://arxiv.org/html/2206.07162#bib.bib67)]. Recently, many works have started using GNNs for human pose estimation[[65](https://arxiv.org/html/2206.07162#bib.bib65), [2](https://arxiv.org/html/2206.07162#bib.bib2), [69](https://arxiv.org/html/2206.07162#bib.bib69)]. Yang _et al_.[[69](https://arxiv.org/html/2206.07162#bib.bib69)] derive the pose dynamics from historical pose tracklets through a GNN that accounts for both spatio-temporal and visual information, while PGCN[[2](https://arxiv.org/html/2206.07162#bib.bib2)] builds a directed graph over the keypoints of the human body to explicitly model their correlations. DEKR[[23](https://arxiv.org/html/2206.07162#bib.bib23)] adopts a pixel-wise spatial transformer to concentrate on information from pixels in the keypoint regions and dedicated adaptive convolutions to further disentangle the representation. Our approach is based on a similar idea to PGCN, in that we take the keypoints in the canonical object coordinates as an additional input in order to leverage the spatial constraints between keypoints. We show that this drastically improves performance on unseen objects and robustness in occluded scenes.
3 Preliminary - Conditional Neural Processes
--------------------------------------------

Conditional Neural Processes (CNPs)[[21](https://arxiv.org/html/2206.07162#bib.bib21)] can be interpreted as conditional models that perform inference for some target inputs $x_t$ conditioned on observations, called “contexts”. These contexts consist of inputs $x_c$ and corresponding labels $y_c$ originating from one specific task. Note that in our case, each distinct object is considered as a task.
The basic form of CNP comprises three core components: encoder, aggregator and decoder. The encoder takes a set of $M_c$ context pairs from a given task $C = \{(x_c^i, y_c^i)\}_{i=1}^{M_c}$ and extracts an embedding from each context pair respectively, $r_i = h_{\theta}(x_c^i, y_c^i),\ \forall (x_c^i, y_c^i) \in C$, where $h$ is a neural network parameterized by $\theta$. Afterwards, the aggregator $a$ summarizes these embeddings using a permutation-invariant operator $\otimes$ and yields the global latent variable as task representation: $z = a(r_1, r_2, \ldots, r_{M_c}) = r_1 \otimes r_2 \otimes \ldots \otimes r_{M_c}$. Since the size of the context set $M_c$ varies and the task representation has to be independent of the order of the contexts, a permutation-invariant mechanism is essential. Max aggregation is used in our model, as we empirically find it outperforms mean aggregation, which is used in the original CNP. Finally, the decoder performs predictions for a set of target inputs $T = \{x_t^i\}_{i=1}^{M_t}$ conditioned on the corresponding task representation $z$ extracted and aggregated before: $\hat{y}_t^i = g_{\phi}(x_t^i, z),\ \forall x_t^i \in T$. $M_t$ is the number of target inputs, and $g$ denotes the decoder, a neural network parameterized by $\phi$.
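The encode-aggregate-decode pattern described above can be sketched in a few lines. The toy `h` and `g` below are hypothetical stand-ins for the learned encoder and decoder networks; max aggregation follows the paper's choice.

```python
import numpy as np

def cnp_forward(ctx_x, ctx_y, tgt_x, h, g):
    """Minimal CNP inference sketch: encode context pairs, aggregate with a
    permutation-invariant max, decode targets conditioned on the result.
    ctx_x: (Mc, Dx) context inputs; ctx_y: (Mc, Dy) labels; tgt_x: (Mt, Dx)."""
    # r_i = h(x_c^i, y_c^i) for every context pair
    r = np.stack([h(np.concatenate([x, y])) for x, y in zip(ctx_x, ctx_y)])
    # z = max aggregation over context embeddings (order-independent)
    z = r.max(axis=0)
    # y_t^i = g(x_t^i, z) for every target input
    return np.stack([g(np.concatenate([x, z])) for x in tgt_x])

# Toy stand-ins for the learned encoder/decoder networks:
h = lambda v: np.tanh(v)
g = lambda v: v.sum(keepdims=True)
preds = cnp_forward(np.zeros((3, 2)), np.ones((3, 1)), np.zeros((5, 2)), h, g)
print(preds.shape)  # (5, 1)
```

Because the max is taken elementwise over the context dimension, shuffling the context set leaves $z$, and hence the predictions, unchanged.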
The ability to extract meaningful latent representations from very few samples renders CNPs well-suited for our purposes. Because each distinct object comes with different predefined keypoints, prior keypoint-based methods for 6D pose estimation do not generalize well to novel objects. Meta-training a CNP to extract latent keypoint representations from object features, however, allows us to overcome this difficulty.

4 Approach
----------

In this paper, we propose a keypoint-based meta-learning approach for 6D pose estimation on unseen objects. Given an RGB-D image, the goal of 6D pose estimation is to calculate the rigid transformation $[R; t]$ from the object coordinates to the camera coordinates, where $R \in SO(3)$ represents the rotation matrix and $t \in \mathbb{R}^3$ represents the translation vector. We build on keypoint-based methods, which first predict the location of keypoints in camera coordinates from input RGB-D images and then regress the transformation between these and predefined keypoints in the object coordinates. The predefined keypoints in canonical object coordinates are thereby fixed beforehand, e.g., using the Farthest Point Sampling (FPS) algorithm on the object mesh.
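As a concrete illustration of how canonical keypoints can be fixed beforehand, here is a minimal greedy FPS sketch over a point set (a stand-in for vertices sampled from the object mesh); the seed index and the toy cube input are assumptions for the example, not details from the paper.

```python
import numpy as np

def farthest_point_sampling(points, k, seed_idx=0):
    """Greedy FPS: iteratively pick the point farthest from those already
    chosen, yielding k well-spread keypoints from an (N, 3) point set."""
    chosen = [seed_idx]
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # farthest from the current keypoint set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

# Corners of a unit cube as a stand-in for mesh vertices:
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
kps = farthest_point_sampling(cube, 4)
print(kps.shape)  # (4, 3)
```

Each iteration maintains, for every point, its distance to the nearest chosen keypoint, so the next pick maximizes coverage of the object surface.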
Figure 3: Overview of the three stages of our method. a) The feature extractor takes RGB-D images as inputs and produces point-wise features for a set of $M_c$ ($M_t$) points subsampled from the input context (target) image. b) The meta-learner (grey shaded area) encodes and aggregates the features of several context images into two latent variables $z_{kp}$ and $z_{seg}$. The segmentation module (blue shaded area) predicts a binary semantic label for each of the $M_t$ feature points of a target image conditioned on the latent representation $z_{seg}$, indicating whether the respective point belongs to the queried object. The keypoint decoder predicts per-point offsets for each keypoint based on the segmented features and the keypoint latent variables $z_{kp}$. c) Lastly, the 6D pose parameters are computed via voting and least-squares fitting.

### 4.1 Overview

We consider 6D pose estimation in three stages: feature extraction, keypoint detection and pose fitting. In the first stage, we employ the feature extractor FFB6D[[26](https://arxiv.org/html/2206.07162#bib.bib26)] to extract representative features from RGB-D images. For the second stage we use a CNP-based meta-learning approach. The flow of context and target samples through our model is shown in [Fig.3](https://arxiv.org/html/2206.07162#S4.F3 "Figure 3 ‣ 4 Approach ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation"), where the context inputs for each task $x_c$, i.e., the features extracted from the context RGB-D images, and the corresponding labels $y_c$ are used jointly to distill a task representation. This representation serves as prior knowledge for the subsequent prediction on target inputs $x_t$. We use two decoders in our meta-learning framework, predicting semantic labels and 3D keypoint offsets respectively. Furthermore, we propose a novel geometry-aware decoder using a GNN for the keypoint offset prediction, which explicitly models the spatial constraints between the keypoints. Finally, the 6D pose parameters are regressed by least-squares fitting in the third stage.
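The least-squares fitting in the third stage can be sketched with the standard SVD-based rigid alignment of two matched 3D point sets; this is a generic textbook implementation, not the paper's exact code, and the random test pose is an assumption for the example.

```python
import numpy as np

def fit_pose(kp_obj, kp_cam):
    """SVD-based least-squares rigid transform: find R, t such that
    kp_cam ≈ R @ kp_obj + t for matched (M, 3) keypoint sets."""
    mu_o, mu_c = kp_obj.mean(axis=0), kp_cam.mean(axis=0)
    H = (kp_obj - mu_o).T @ (kp_cam - mu_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_o
    return R, t

rng = np.random.default_rng(0)
P = rng.standard_normal((9, 3))               # 9 canonical keypoints
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
R, t = fit_pose(P, P @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Centering both point sets decouples rotation from translation; the determinant check keeps the recovered $R$ a proper rotation in $SO(3)$ rather than a reflection.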
### 4.2 Feature Extraction
For feature extraction we rely on the fusion network FFB6D[[26](https://arxiv.org/html/2206.07162#bib.bib26)] which combines appearance and geometry information from RGB-D images and extracts representative features for a subset of seed points sampled from the input depth images. Therefore, the output is a set of per-point features corresponding to the sampled seed points.
### 4.3 Meta-Learner for Keypoint Detection
Two steps are involved in the keypoint estimation procedure: segmentation of the queried object and keypoint detection, which both rely on a preceding extraction of latent representations.
Extraction of latent representations. Identifying and distinguishing a novel object from a multi-object scene and extracting its keypoints requires modules that are conditioned on the latent representation of the queried object. In order to obtain such a latent representation, we need a set of context samples $\{(x_{c,i}, y_{c,i})\}_{i=1}^{M_c}$. Here $x_{c,i}$ denotes the per-point features extracted in the first stage from context images, and $y_{c,i} = \{y_{c,i}^u\}_{u=1}^{M_k}$ is the ground-truth label, where $y_{c,i}^u = \{y_{of}^u, y_{seg}\}_{c,i}$ includes the 3D keypoint offsets $y_{of}^u$ between the seed point and the predefined keypoint $p_u$, and the semantic label $y_{seg} \in \{0, 1\}$ indicating whether the seed point belongs to the queried object. Given a context sample as input, an encoder generates per-seed-point embeddings for each of the $M_k$ keypoints to be predicted:
|
| 92 |
+
|
| 93 |
+
$$r_{i}^{u} = h_{\theta}(x_{c,i} \oplus y_{c,i}^{u}), \qquad i = 1, \dots, M_c, \; u = 1, \dots, M_k, \tag{1}$$
where $M_c$ denotes the number of seed points selected from each context image and $M_k$ is the number of selected keypoints, which in our case is 9. $\oplus$ stands for the concatenation operation, where the inputs are first broadcast to the same shape if necessary. The obtained embeddings are next aggregated by max aggregation to first obtain a latent representation $z_{kp}^{u}$ for each keypoint. A second aggregation over these keypoint representations is then applied in order to extract a representation $z_{seg}$ for the segmentation task:
$$z_{kp}^{u} = \max_{i=1}^{M_c}(r_{i}^{u}), \qquad u = 1, \dots, M_k, \tag{2}$$
$$z_{seg} = \max_{u=1}^{M_k}(z_{kp}^{u}). \tag{3}$$
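The two-stage max aggregation of Eqs. (1)–(3) can be sketched as follows. This is a minimal NumPy sketch: the dimensions and the random linear map standing in for the learned encoder $h_\theta$ are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes (not from the paper): feature dim 32, label dim 4,
# embedding dim 16, M_c = 64 context seed points, M_k = 9 keypoints.
D_X, D_Y, D_R, M_C, M_K = 32, 4, 16, 64, 9

# A random linear map stands in for the learned encoder h_theta.
W = rng.normal(size=(D_X + D_Y, D_R))

def encode_context(x_c, y_c):
    """Eqs. (1)-(3): embed every (feature, label) pair, then max-aggregate
    over context points (Eq. 2) and over keypoints (Eq. 3)."""
    m_c = x_c.shape[0]
    # Broadcast each point feature to every keypoint and concatenate (the "⊕").
    x_b = np.broadcast_to(x_c[:, None, :], (m_c, M_K, D_X))
    r = np.concatenate([x_b, y_c], axis=-1) @ W   # r_i^u, Eq. (1)
    z_kp = r.max(axis=0)                          # z_kp^u, Eq. (2)
    z_seg = z_kp.max(axis=0)                      # z_seg, Eq. (3)
    return z_kp, z_seg

x_c = rng.normal(size=(M_C, D_X))
y_c = rng.normal(size=(M_C, M_K, D_Y))
z_kp, z_seg = encode_context(x_c, y_c)            # shapes (M_K, D_R) and (D_R,)
```

Because the maximum is taken over the context points, the representation is invariant to their ordering and to their number, which is what allows the context set size to vary between episodes.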
Conditional Segmentation. In the step described above, the model encapsulates relevant information (e.g., shape and texture attributes) into the latent variable $z_{seg}$, which can then be used to identify and locate the queried object in the target images. The segmentation decoder $g_{\mathcal{S}}$ takes the latent variable $z_{seg}$ and the features $x_t$ extracted from the target images (see [Fig. 3](https://arxiv.org/html/2206.07162#S4.F3)) and predicts a semantic label for each seed point via a multi-layer perceptron (MLP):
$$y_{seg,i} = g_{\mathcal{S}}(x_{t,i} \oplus z_{seg}), \qquad i = 1, \dots, M_t, \tag{4}$$
where $M_t$ is the number of seed points sampled from each target image and $x_{t,i}$ denotes the corresponding extracted features. These per-point segmentation predictions $y_{seg}$ are then used to select from $x_t$ only the seed-point features $x_{obj}$ that belong to the queried object, which are passed on to the subsequent keypoint prediction.
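Eq. (4) and the subsequent feature selection admit a minimal sketch; the sizes and the random linear map standing in for the MLP $g_{\mathcal{S}}$ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Z, M_T = 32, 16, 128                  # assumed sizes, not from the paper
W_seg = rng.normal(size=(D_X + D_Z, 1))      # stand-in for the MLP g_S

def segment(x_t, z_seg):
    """Eq. (4): condition every target seed point on z_seg and classify it."""
    z_b = np.broadcast_to(z_seg, (x_t.shape[0], z_seg.shape[0]))
    logits = np.concatenate([x_t, z_b], axis=-1) @ W_seg
    return logits[:, 0] > 0                   # boolean foreground mask

x_t = rng.normal(size=(M_T, D_X))
mask = segment(x_t, rng.normal(size=D_Z))
x_obj = x_t[mask]                             # features kept for keypoint prediction
```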
Conditional Keypoint Offset Prediction. The keypoint offset decoder $g_{\mathcal{K}}$ takes the features selected by the segmentation module along with the latent variables $z_{kp}$ as input and predicts translation offsets $y_{of}$ for each keypoint:
$$y_{of,i}^{u} = g_{\mathcal{K}}(x_{obj,i} \oplus z_{kp}^{u}), \qquad i = 1, \dots, M_{obj}, \; u = 1, \dots, M_k, \tag{5}$$
where $M_{obj}$ denotes the number of selected seed points on the queried object and $x_{obj,i}$ denotes the object features of the $i$-th seed point. The decoder $g_{\mathcal{K}}$ can be any appropriate module in [Sec. 4.3](https://arxiv.org/html/2206.07162#S4.Ex1). In the vanilla version of our framework, it is given by a simple MLP. In our final version, however, we use a GNN for $g_{\mathcal{K}}$, the details of which are given in [Sec. 4.4](https://arxiv.org/html/2206.07162#S4.SS4).
Pose Fitting. As in [[26]](https://arxiv.org/html/2206.07162#bib.bib26), we adopt MeanShift [[9]](https://arxiv.org/html/2206.07162#bib.bib9) to obtain the final keypoint predictions $\{p_i^{*}\}_{i=1}^{M_k}$ in camera coordinates from the keypoint candidates output by the keypoint decoder. Given the predefined 3D keypoints in object coordinates $\{p_i\}_{i=1}^{M_k}$, 6D pose estimation can be cast as a least-squares fitting problem [[1]](https://arxiv.org/html/2206.07162#bib.bib1), where the optimized pose parameters $[R;t]$ are calculated by minimizing the squared loss using singular value decomposition (SVD):
$$L_{lsf} = \sum_{i=1}^{M_k} \left\lVert p_i^{*} - (R \cdot p_i + t) \right\rVert^{2}. \tag{6}$$
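Eq. (6) has the well-known closed-form solution of Arun et al. [1]; a minimal sketch (the helper name `fit_pose` is ours, not the paper's):

```python
import numpy as np

def fit_pose(p_obj, p_cam):
    """Least-squares fit of [R; t] mapping object-frame keypoints p_obj
    (M_k, 3) onto camera-frame keypoints p_cam (M_k, 3) via SVD [1]."""
    mu_o, mu_c = p_obj.mean(axis=0), p_cam.mean(axis=0)
    H = (p_obj - mu_o).T @ (p_cam - mu_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_o
    return R, t

# Round trip with a known pose: noise-free input is recovered exactly.
rng = np.random.default_rng(0)
p = rng.normal(size=(9, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
R_est, t_est = fit_pose(p, p @ R_true.T + t_true)
```

The sign correction on the last singular direction guards against the degenerate case where the unconstrained optimum would be a reflection rather than a rotation.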
### 4.4 Geometry-Aware Keypoint Decoder
Similar to prior methods [[27]](https://arxiv.org/html/2206.07162#bib.bib27), [[26]](https://arxiv.org/html/2206.07162#bib.bib26), we rely on predefined object keypoints for the final pose fitting. However, we also use them as an additional input to the keypoint decoder: since they carry useful prior knowledge of the object's geometric structure, they can significantly improve keypoint detection. In order to highlight this additional input to our decoder, we rewrite [Sec. 4.3](https://arxiv.org/html/2206.07162#S4.Ex1) as follows:
$$y_{of,i}^{u} = g_{\mathcal{K}}(x_{obj,i},\, z_{kp}^{v},\, p_v), \qquad v \in \mathcal{N}(u), \tag{7}$$
where $\mathcal{N}(u)$ denotes the neighbour set of keypoint $u$, including $u$ itself, and $p_v$ are the 3D object coordinates of keypoint $v$. To leverage the geometric information contained in the relations among the keypoints, we propose a GNN-based decoder $g_{\mathcal{K}}$ instead of the simple MLP of [Sec. 4.3](https://arxiv.org/html/2206.07162#S4.Ex1). For this purpose, we create a graph over the keypoints of each object: the nodes are given by the keypoints, and each node shares edges with its $k$ nearest neighbours. [Fig. 4](https://arxiv.org/html/2206.07162#S4.F4) illustrates an example with $k=3$.

Figure 4: Example of the graph generation. The node positions are determined by the predefined keypoints in object coordinates. By applying the K Nearest Neighbor (KNN) algorithm, we find the $k$ closest nodes of each parent node and connect them by edges. The graph shown here is generated with $k=3$: the red node is selected as the parent node and the green nodes are its three nearest neighbors. The driller is sampled from LineMOD.
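The graph construction described in the caption can be sketched as follows; `knn_graph` is a hypothetical helper implementing a generic KNN over the keypoint coordinates, not the authors' code:

```python
import numpy as np

def knn_graph(keypoints, k=3):
    """Build the keypoint graph: each node is connected to its k nearest
    neighbours; the returned neighbourhood N(u) also contains u itself."""
    d = np.linalg.norm(keypoints[:, None, :] - keypoints[None, :, :], axis=-1)
    # Column 0 of the argsort is u itself (distance 0), then its neighbours.
    return np.argsort(d, axis=1)[:, :k + 1]

# Toy example with five distinct 3D keypoints.
kps = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [2, 2, 2]], float)
nbrs = knn_graph(kps, k=3)   # nbrs[u] lists N(u), including u itself
```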
Internally, [Eq. 7](https://arxiv.org/html/2206.07162#S4.E7) is split into the following two steps of message passing along the graph:
$$\alpha_{i}^{u,v} = f^{l}(x_{obj,i} \oplus z_{kp}^{v},\, p_u - p_v), \qquad \forall v \in \mathcal{N}(u), \tag{8}$$
$$y_{of,i}^{u} = f^{g}\Big(\max_{v \in \mathcal{N}(u)} \alpha_{i}^{u,v}\Big). \tag{9}$$
$g_{\mathcal{K}}$ is correspondingly composed of two sub-networks, $f^{l}$ and $f^{g}$: the former computes the messages $\alpha_{i}^{u,v}$ sent along all edges; the latter aggregates the messages arriving at each node $u$ to update the corresponding node features and decodes them into the keypoint offsets $y_{of,i}^{u}$.
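A minimal sketch of Eqs. (8)–(9) for a single object seed point; the dimensions and the random linear maps standing in for $f^{l}$ and $f^{g}$ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Z, D_M, M_K = 32, 16, 24, 9            # assumed sizes, not from the paper

W_l = rng.normal(size=(D_X + D_Z + 3, D_M))   # stand-in for f^l
W_g = rng.normal(size=(D_M, 3))               # stand-in for f^g

def gnn_offsets(x_i, z_kp, kps, nbrs):
    """Eqs. (8)-(9) for one seed point with features x_i (D_X,): compute
    per-edge messages, max-aggregate over N(u), decode a 3D offset."""
    y = np.empty((M_K, 3))
    for u in range(M_K):
        msgs = [np.concatenate([x_i, z_kp[v], kps[u] - kps[v]]) @ W_l  # Eq. (8)
                for v in nbrs[u]]
        y[u] = np.max(msgs, axis=0) @ W_g                              # Eq. (9)
    return y

kps = rng.normal(size=(M_K, 3))               # predefined keypoints (object frame)
nbrs = [list(range(M_K)) for _ in range(M_K)] # fully connected graph (k = 8)
y_of = gnn_offsets(rng.normal(size=D_X), rng.normal(size=(M_K, D_Z)), kps, nbrs)
```

With nine keypoints and $k=8$, as used in the experiments, the graph is fully connected.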
5 Experiments
-------------
### 5.1 Datasets

(a) Toy

(b) PBR

(c) Occlusion
Figure 5: Samples from the MCMS dataset.

Figure 6: Qualitative comparison between the GNN and MLP decoders for keypoint prediction on PBR-MCMS. Triangles and circles are the projected ground-truth and predicted keypoints, respectively. The keypoint predictions of the MLP decoder are randomly shifted without considering the geometric constraints between keypoints. By contrast, the predictions of the GNN decoder are more accurate. The motorcycle example shows that even though the keypoints predicted by the GNN are slightly shifted, the geometric constraints are still met, resulting in a uniform shift of all keypoints.
LineMOD. LineMOD [[29]](https://arxiv.org/html/2206.07162#bib.bib29) is a widely used dataset for 6D pose estimation comprising 13 different objects in 13 scenes. Each scene contains multiple objects, but only one of them is annotated with a 6D pose and instance mask.
MCMS dataset. Since no datasets are available for cross-category level 6D pose estimation, we generate two fully annotated synthetic datasets using objects from ShapeNet [[3]](https://arxiv.org/html/2206.07162#bib.bib3), containing various objects from Multiple Categories in Multiple Scenes (MCMS). The simple version of MCMS, named Toy-MCMS, is composed of images containing a single object with backgrounds randomly sampled from the real-world image dataset SUN [[55]](https://arxiv.org/html/2206.07162#bib.bib55). Our second dataset is further divided into a non-occluded and an occluded version, called PBR-MCMS and Occlusion-MCMS. To create these datasets, we extend the open-source physics-based rendering (PBR) pipeline [[10]](https://arxiv.org/html/2206.07162#bib.bib10) with functionalities such as online truncation and occlusion checks. For each image, five objects are placed in a random scene with textured planes and varying lighting conditions; images are then captured with a rotating camera from a range of distances. PBR-MCMS contains images without occlusion, while Occlusion-MCMS contains images with 5%–20% occlusion of the queried object. [Fig. 5](https://arxiv.org/html/2206.07162#S5.F5) shows an example from each dataset using an object from the car category as the queried object.
### 5.2 Evaluation Metrics
We use the average distance metric ADD [[29]](https://arxiv.org/html/2206.07162#bib.bib29) for evaluation. Given the predicted 6D pose $[R;t]$ and the ground-truth pose $[R^{*};t^{*}]$, the ADD metric is defined as:
$$\mathrm{ADD} = \frac{1}{m} \sum_{x \in \mathcal{O}} \left\lVert (Rx + t) - (R^{*}x + t^{*}) \right\rVert, \tag{10}$$
where $\mathcal{O}$ denotes the object mesh and $m$ is the total number of vertices on the object mesh. This metric calculates the mean distance between the two point sets transformed by the predicted and the ground-truth pose, respectively. Similar to other works [[68]](https://arxiv.org/html/2206.07162#bib.bib68), [[45]](https://arxiv.org/html/2206.07162#bib.bib45), [[26]](https://arxiv.org/html/2206.07162#bib.bib26), we report the ADD-0.1d accuracy, i.e., the ratio of test samples whose ADD is less than 10% of the object's diameter.
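Eq. (10) and the ADD-0.1d accuracy are straightforward to compute; a small sketch (`add_metric` and `add_01d` are hypothetical helper names):

```python
import numpy as np

def add_metric(verts, R, t, R_star, t_star):
    """Eq. (10): mean distance between the mesh vertices transformed by the
    predicted pose [R; t] and by the ground-truth pose [R*; t*]."""
    pred = verts @ R.T + t
    gt = verts @ R_star.T + t_star
    return np.linalg.norm(pred - gt, axis=1).mean()

def add_01d(add_values, diameter):
    """ADD-0.1d: fraction of test samples with ADD below 10% of the diameter."""
    return float(np.mean(np.asarray(add_values) < 0.1 * diameter))

verts = np.random.default_rng(0).normal(size=(100, 3))
I = np.eye(3)
# Poses differing by a pure 1 cm translation give ADD = 0.01 for any mesh.
err = add_metric(verts, I, np.array([0.0, 0.0, 0.01]), I, np.zeros(3))
```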
### 5.3 Implementation and Training Details
For each object, we define 9 keypoints: 8 are sampled from the 3D object model using farthest point sampling (FPS), and the ninth is the object center. The number of nearest neighbours per keypoint in our geometry-aware decoder is set to $k=8$. To train the meta-learner, we use the Focal Loss [[38]](https://arxiv.org/html/2206.07162#bib.bib38) to supervise the segmentation module and an L1 loss for the per-point translation offset prediction. The overall loss is a weighted sum of both terms, with weight 2.5 for segmentation and 1.0 for the keypoint offsets. During training, in each iteration we randomly sample 18 objects and 12 images per object. The number of context images per object is randomly chosen between 2 and 8, while the remaining images are used as the target set.
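The FPS selection of the 8 surface keypoints can be sketched as follows (a generic greedy FPS, not necessarily the authors' exact implementation):

```python
import numpy as np

def farthest_point_sampling(pts, n):
    """Greedy FPS: repeatedly pick the point farthest from all points
    chosen so far, yielding well-spread keypoints on the model."""
    idx = [0]                                   # arbitrary starting point
    d = np.linalg.norm(pts - pts[0], axis=1)    # distance to the chosen set
    for _ in range(n - 1):
        nxt = int(np.argmax(d))
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(pts - pts[nxt], axis=1))
    return np.array(idx)

# On a unit cube plus its center, FPS picks the eight corners: the center
# is closer to every corner than any corner is to another corner.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
chosen = farthest_point_sampling(np.vstack([cube, cube.mean(0)]), 8)
```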
Training setup. For the LineMOD dataset, we use iron, lamp, and phone as novel objects for testing and the 10 remaining objects for training. Since LineMOD contains only a very limited number of objects, we evaluate only the keypoint offset prediction module, using the ground-truth segmentation to select the points belonging to the queried object. For Toy- and PBR-MCMS, we use 20 and 19 categories for training, respectively, with 30 objects per category and 50 images per object. During evaluation, 30 novel objects of each training category are tested for intra-category performance and 5 novel categories for cross-category performance. All experiments are conducted on an NVIDIA V100-32GB GPU.
Table 1: Multi-category evaluation on Toy-MCMS dataset. Novel objects are marked with *.
| Category | Vanilla-ML (ADD) | GAML (ADD) |
| --- | --- | --- |
| Airplane | 80.6 | 87.2 |
| Bench | 56.7 | 72.3 |
| Chair | 62.8 | 80.9 |
| Motorcycle | 92.6 | 94.7 |
| Washer | 85.4 | 91.4 |
| Bus* | 83.0 | 85.4 |
| Cap* | 46.6 | 54.2 |
| Laptop* | 18.8 | 48.8 |
| Piano* | 47.1 | 50.7 |
| Remote* | 53.5 | 56.1 |
| Intra-Categ. | 74.2 | 81.9 |
| Cross-Categ. | 50.3 | 59.0 |
| All | 69.4 | 77.2 |
| Category | FFB6D (PBR) | Vanilla-ML (PBR) | Vanilla-ML (Occ.) | Vanilla-ML (Δ) | GAML (PBR) | GAML (Occ.) | GAML (Δ) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Airplane | 9.1 | 90.4 | 43.2 | 47.2 | 89.8 | 46.6 | 43.2 |
| Bench | 2.9 | 62.1 | 40.4 | 21.7 | 69.8 | 49.0 | 20.8 |
| Chair | 1.1 | 80.0 | 54.4 | 25.6 | 80.0 | 55.6 | 22.4 |
| Motorcycle | 12.7 | 90.2 | 64.8 | 25.4 | 85.6 | 54.4 | 31.2 |
| Washer | 4.1 | 54.8 | 37.7 | 17.1 | 68.1 | 55.0 | 13.1 |
| Birdhouse* | 0.8 | 35.6 | 23.5 | 12.1 | 35.4 | 28.0 | 7.4 |
| Car* | 2.4 | 52.5 | 42.9 | 9.6 | 56.9 | 44.4 | 12.5 |
| Laptop* | 1.3 | 54.0 | 26.0 | 28.0 | 85.0 | 47.7 | 37.3 |
| Piano* | 2.0 | 45.8 | 27.3 | 18.5 | 45.8 | 32.5 | 13.3 |
| Sofa* | 2.6 | 68.1 | 45.4 | 22.7 | 69.8 | 57.9 | 11.9 |
| Intra-Categ. | 4.53 | 58.2 | 38.7 | 19.5 | 62.9 | 43.9 | 19.0 |
| Cross-Categ. | 1.81 | 51.2 | 33.0 | 18.2 | 58.6 | 42.1 | 16.5 |
| All | 3.96 | 56.7 | 37.6 | 19.1 | 62.0 | 43.5 | 18.5 |
Table 2: Multi-category evaluation on the PBR- and Occlusion-MCMS datasets. Δ represents the performance gap between PBR- and Occlusion-MCMS.
Table 3: Evaluation results on LineMOD dataset.
### 5.4 Evaluation Results
We evaluate our approach using the LineMOD and MCMS datasets at intra- and cross-category levels. More quantitative and qualitative results are provided in [Appendix A](https://arxiv.org/html/2206.07162#A1 "Appendix A Evaluation Results ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation").
LineMOD. [Tab. 3](https://arxiv.org/html/2206.07162#S5.T3) shows training and test results following [[26]](https://arxiv.org/html/2206.07162#bib.bib26). Note that the ground-truth segmentation is used for these results; we only evaluate the performance and generalization ability of the keypoint offset prediction module. Our model not only performs better on the training objects but also generalizes well to new objects, even though it is trained on a limited number of objects and tested on objects with large variations in appearance and geometry.
Toy- & PBR-MCMS. [Tab. 2](https://arxiv.org/html/2206.07162#S5.T2) shows test results on the Toy-MCMS dataset, demonstrating that our proposed GNN decoder (GAML) consistently outperforms the vanilla meta-learner (Vanilla-ML) with a classic MLP decoder across all categories. [Fig. 6](https://arxiv.org/html/2206.07162#S5.F6) visualizes some test examples for qualitative comparison. Next, we compare our meta-learner to FFB6D on the PBR dataset. [Tab. 2](https://arxiv.org/html/2206.07162#S5.T2) shows that our model generalizes well, while FFB6D cannot directly transfer to novel objects. For a fair comparison, we further train FFB6D on the PBR dataset and fine-tune the pretrained model on each specific novel object with the same context images as given to GAML. [Tab. 4](https://arxiv.org/html/2206.07162#S5.T4) shows that our model still reliably outperforms the fine-tuned FFB6D and requires no trade-off between new and preceding tasks, whereas fine-tuning typically degrades performance on the previous tasks.
Table 4: Comparison between GAML and fine-tuned FFB6D on PBR-MCMS using ADD metric.

Figure 7: Qualitative results on PBR- and Occlusion-MCMS datasets. Triangles and circles are the projections of ground-truth and predicted keypoints respectively. Note that our model is trained only on PBR-MCMS but shows robust performance on Occlusion-MCMS.
Occlusion-MCMS. Quantitative and qualitative results on Occlusion-MCMS are presented in [Tab. 2](https://arxiv.org/html/2206.07162#S5.T2) and [Fig. 7](https://arxiv.org/html/2206.07162#S5.F7). Strikingly, our approach achieves consistent and robust performance on occluded scenes even though it is trained only on the non-occluded PBR-MCMS.
### 5.5 Ablation Study
Effect of K Neighbors in GNN. In [Tab. 5](https://arxiv.org/html/2206.07162#S5.T5), we study the effect of the number of neighbors $k$ in the GNN. We run tests with five seeds and report the mean. Compared to $k=3$, using all keypoints as neighbors improves robustness. We find this to be most important when training on a single category with limited object variation, where involving all keypoints yields a more expressive spatial representation. Full statistics are presented in [Sec. B.1](https://arxiv.org/html/2206.07162#A2.SS1).
Table 5: ADD results on PBR-MCMS using different numbers $k$ of neighbors in the GNN decoder.
Effect of the Aggregation Module in CNP. In our work, the CNP uses max aggregation instead of the mean aggregation used in the original paper [[21]](https://arxiv.org/html/2206.07162#bib.bib21). We further compare max aggregation with the cross-attention module proposed in Attentive Neural Processes (ANPs) [[32]](https://arxiv.org/html/2206.07162#bib.bib32), with the self-attention part removed. The training curves (see [Sec. B.2](https://arxiv.org/html/2206.07162#A2.SS2)) show that both methods achieve similar training performance, though the ANP converges faster at the beginning. Nevertheless, [Tab. 6](https://arxiv.org/html/2206.07162#S5.T6) shows that the CNP generalizes slightly better to novel tasks at both the intra- and cross-category levels.
Robustness to Occlusion. To further illustrate the benefits of the geometry-aware estimator, we compare GAML with Vanilla-ML. The results in [Tab. 2](https://arxiv.org/html/2206.07162#S5.T2) show that our proposed GNN decoder significantly improves performance and robustness on occluded scenes.
Table 6: ADD results of CNP and ANP on the Toy dataset.
Limitations. We identify two limitations of our method. First, we observe that in rare cases our model suffers from feature ambiguity, struggling to disentangle feature variations, e.g., textures, shapes, and lighting conditions. It can occasionally be fooled by two similar objects, which results in inaccurate segmentation (see [Fig. 8(a)](https://arxiv.org/html/2206.07162#S5.F7.sf1)). Second, keypoint-based approaches suffer from symmetry ambiguity, especially on novel objects where the symmetry axis is unknown. Consequently, keypoint predictions around the symmetry axis can be mismatched and hamper training (see [Fig. 8(b)](https://arxiv.org/html/2206.07162#S5.F7.sf2)). Accounting for the symmetry ambiguity, we also provide evaluations with the ADD-S metric in [Sec. A.3](https://arxiv.org/html/2206.07162#A1.SS3) following prior work [[26]](https://arxiv.org/html/2206.07162#bib.bib26), [[63]](https://arxiv.org/html/2206.07162#bib.bib63), [[27]](https://arxiv.org/html/2206.07162#bib.bib27).

(a) Feature ambiguity.

|
| 256 |
+
|
| 257 |
+
(b) Symmetry ambiguity.
Figure 8: Limitations of the proposed method.
6 Conclusion
------------
In this paper, we present a CNP-based meta-learner for cross-category level 6D pose estimation, which is capable of extracting and transferring latent representations of unseen objects from only a few samples. In addition, we propose a simple yet effective geometry-aware keypoint detection module based on a GNN, which leverages the spatial connections between keypoints and improves both generalization to unseen objects and robustness in occluded scenes. Furthermore, we create fully annotated synthetic datasets called MCMS with diverse objects and categories, aiming to fill the gap in cross-category pose estimation benchmarks.
References
----------
* [1] K.S. Arun, T.S. Huang, and S.D. Blostein. Least-squares fitting of two 3-d point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9(5):698–700, 1987.
* [2] Yanrui Bin, Zhao-Min Chen, Xiu-Shen Wei, Xinya Chen, Changxin Gao, and Nong Sang. Structure-aware human pose estimation with graph convolutional networks. Pattern Recognition, 106:107410, 05 2020.
* [3] Angel X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Qixing Huang, Zimo Li, S. Savarese, M. Savva, Shuran Song, Hao Su, J. Xiao, L. Yi, and F. Yu. Shapenet: An information-rich 3d model repository. ArXiv, abs/1512.03012, 2015.
* [4] Dengsheng Chen, Jun Li, Zheng Wang, and Kai Xu. Learning canonical shape space for category-level 6d object pose and size estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [5] Kai Chen and Qi Dou. Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2773–2782, October 2021.
* [6] Tung-I Chen, Yueh-Cheng Liu, Hung-Ting Su, Yu-Cheng Chang, Yu-Hsiang Lin, Jia-Fong Yeh, and Winston H. Hsu. Should I look at the head or the tail? dual-awareness attention for few-shot object detection. IEEE Trans. Multim., 23, 2021.
* [7] Wei Chen, Xi Jia, Hyung Jin Chang, Jinming Duan, Linlin Shen, and Ales Leonardis. Fs-net: Fast shape-based network for category-level 6d object pose estimation with decoupled rotation mechanism. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1581–1590, June 2021.
* [8] Xu Chen, Zijian Dong, Jie Song, Andreas Geiger, and Otmar Hilliges. Category level object pose estimation via neural analysis-by-synthesis. In European Conference on Computer Vision (ECCV), Cham, Aug. 2020. Springer International Publishing.
* [9] D. Comaniciu and P. Meer. Mean shift: a robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
* [10] Maximilian Denninger, Martin Sundermeyer, Dominik Winkelbauer, Youssef Zidan, Dmitry Olefir, Mohamad Elbadrawy, Ahsan Lodhi, and Harinandan Katam. Blenderproc. CoRR, abs/1911.01911, 2019.
* [11] Guoguang Du, Kai Wang, and Shiguo Lian. Vision-based robotic grasping from object localization, pose estimation, grasp detection to motion planning: A review. CoRR, abs/1905.06658, 2019.
* [12] Qi Fan, Wei Zhuo, Chi-Keung Tang, and Yu-Wing Tai. Few-shot object detection with attention-rpn and multi-relation detector. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [13] Zhibo Fan, Yuchen Ma, Zeming Li, and Jian Sun. Generalized few-shot object detection without forgetting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4527–4536, June 2021.
* [14] Zhaoxin Fan, Yazhi Zhu, Yulin He, Qi Sun, Hongyan Liu, and Jun He. Deep learning on monocular object pose detection and tracking: A comprehensive overview. ArXiv, abs/2105.14291, 2021.
* [15] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR, 06–11 Aug 2017.
* [16] Chelsea Finn, Aravind Rajeswaran, Sham Kakade, and Sergey Levine. Online meta-learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1920–1930. PMLR, 09–15 Jun 2019.
* [17] Kai Fischer, Martin Simon, Florian Olsner, Stefan Milz, Horst-Michael Gross, and Patrick Mader. Stickypillars: Robust and efficient feature matching on point clouds using graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 313–323, June 2021.
* [18] Martin A. Fischler and Robert C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24:381–395, 1981.
* [19] Ning Gao, Jingyu Zhang, Ruijie Chen, Ngo Anh Vien, Hanna Ziesche, and Gerhard Neumann. Meta-learning regrasping strategies for physical-agnostic objects. ArXiv, abs/2205.11110, 2023.
* [20] Ning Gao, Hanna Ziesche, Ngo Anh Vien, Michael Volpp, and Gerhard Neumann. What matters for meta-learning vision regression tasks? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14776–14786, June 2022.
* [21] Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S.M.Ali Eslami. Conditional neural processes. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1704–1713. PMLR, 10–15 Jul 2018.
* [22] Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S.M.Ali Eslami, and Yee Whye Teh. Neural processes. In ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
* [23] Zigang Geng, Ke Sun, Bin Xiao, Zhaoxiang Zhang, and Jingdong Wang. Bottom-up human pose estimation via disentangled keypoint regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14676–14686, June 2021.
* [24] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [25] Jonathan Gordon, Wessel P. Bruinsma, Andrew Y.K. Foong, James Requeima, Yann Dubois, and Richard E. Turner. Convolutional conditional neural processes. In International Conference on Learning Representations, 2020.
* [26] Yisheng He, Haibin Huang, Haoqiang Fan, Qifeng Chen, and Jian Sun. Ffb6d: A full flow bidirectional fusion network for 6d pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3003–3013, June 2021.
* [27] Yisheng He, Wei Sun, Haibin Huang, Jianran Liu, Haoqiang Fan, and Jian Sun. Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [28] Stefan Hinterstoisser, Cedric Cagniart, Slobodan Ilic, Peter Sturm, Nassir Navab, Pascal Fua, and Vincent Lepetit. Gradient response maps for real-time detection of textureless objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(5):876–888, 2012.
* [29] Stefan Hinterstoisser, Vincent Lepetit, Slobodan Ilic, Stefan Holzer, Gary Bradski, Kurt Konolige, and Nassir Navab. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In Kyoung Mu Lee, Yasuyuki Matsushita, James M. Rehg, and Zhanyi Hu, editors, Computer Vision – ACCV 2012, pages 548–562, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
* [30] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [31] Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, and Eric P. Xing. Rethinking knowledge graph propagation for zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [32] Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive neural processes. In International Conference on Learning Representations, 2019.
* [33] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [34] Byung-Jun Lee, Seunghoon Hong, and Kee-Eung Kim. Residual neural processes. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):4545–4552, Apr. 2020.
* [35] Chung-Wei Lee, Wei Fang, Chih-Kuan Yeh, and Yu-Chiang Frank Wang. Multi-label zero-shot learning with structured knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [36] Xiang Li, Tianhan Wei, Yau Pun Chen, Yu-Wing Tai, and Chi-Keung Tang. Fss-1000: A 1000-class dataset for few-shot segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [37] Zhidong Liang, Ming Yang, Liuyuan Deng, Chunxiang Wang, and Bing Wang. Hierarchical depthwise graph convolutional neural network for 3d semantic segmentation of point clouds. In 2019 International Conference on Robotics and Automation (ICRA), pages 8152–8158, 2019.
* [38] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
* [39] Lu Liu, William L. Hamilton, Guodong Long, Jing Jiang, and Hugo Larochelle. A universal representation transformer layer for few-shot image classification. In International Conference on Learning Representations, 2021.
* [40] Jianwu Long, Zeran Yan, and Hongfa Chen. A graph neural network for superpixel image classification. Journal of Physics: Conference Series, 1871(1):012071, apr 2021.
* [41] Christos Louizos, Xiahan Shi, Klamer Schutte, and Max Welling. The functional neural process. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
* [42] Mateusz Michalkiewicz, Sarah Parisot, Stavros Tsogkas, Mahsa Baktashmotlagh, Anders P. Eriksson, and Eugene Belilovsky. Few-shot single-view 3-d object reconstruction with compositional priors. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXV, volume 12370 of Lecture Notes in Computer Science, pages 614–630. Springer, 2020.
* [43] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. CoRR, abs/1803.02999, 2018.
* [44] Ayyappa Kumar Pambala, Titir Dutta, and Soma Biswas. Sml: Semantic meta-learning for few-shot semantic segmentation. Pattern Recognition Letters, 147:93–99, 2021.
* [45] Sida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao. Pvnet: Pixel-wise voting network for 6dof pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [46] Juan-Manuel Perez-Rua, Xiatian Zhu, Timothy M. Hospedales, and Tao Xiang. Incremental few-shot object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [47] Quang-Hieu Pham, Mikaela Angelina Uy, Binh-Son Hua, Duc Thanh Nguyen, Gemma Roig, and Sai-Kit Yeung. LCD: Learned cross-domain descriptors for 2D-3D matching. In the AAAI Conference on Artificial Intelligence, 2020.
* [48] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
* [49] Xiaojuan Qi, Renjie Liao, Jiaya Jia, Sanja Fidler, and Raquel Urtasun. 3d graph neural networks for rgbd semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
* [50] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning(ICML), pages 1842–1850, 2016.
* [51] Weijing Shi and Raj Rajkumar. Point-gnn: Graph neural network for 3d object detection in a point cloud. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [52] Mennatullah Siam, Boris N. Oreshkin, and Martin Jagersand. Amp: Adaptive masked proxies for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
* [53] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In I. Guyon, U.V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
* [54] Chen Song, Jiaru Song, and Qixing Huang. Hybridpose: 6d object pose estimation under hybrid representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [55] Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
* [56] Martin Sundermeyer, Zoltan-Csaba Marton, Maximilian Durner, Manuel Brucker, and Rudolph Triebel. Implicit 3d orientation learning for 6d object detection from rgb images. In The European Conference on Computer Vision (ECCV), September 2018.
* [57] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
* [58] Meng Tian, Marcelo H. Ang, and Gim Hee Lee. Shape prior deformation for categorical 6d object pose and size estimation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, The European Conference on Computer Vision (ECCV), pages 530–546, Cham, 2020. Springer International Publishing.
* [59] Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. Cross-domain few-shot classification via learned feature-wise transformation. In International Conference on Learning Representations, 2020.
* [60] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, koray kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
* [61] Bram Wallace and Bharath Hariharan. Few-shot generalization for single-image 3d reconstruction via priors. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
* [62] Chen Wang, Roberto Martín-Martín, Danfei Xu, Jun Lv, Cewu Lu, Li Fei-Fei, Silvio Savarese, and Yuke Zhu. 6-pack: Category-level 6d pose tracker with anchor-based keypoints. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 10059–10066, 2020.
* [63] Chen Wang, Danfei Xu, Yuke Zhu, Roberto Martin-Martin, Cewu Lu, Li Fei-Fei, and Silvio Savarese. Densefusion: 6d object pose estimation by iterative dense fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [64] He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J. Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [65] Jian Wang, Xiang Long, Yuan Gao, Errui Ding, and Shilei Wen. Graph-pcnn: Two stage human pose estimation with graph pose refinement. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 492–508, Cham, 2020. Springer International Publishing.
* [66] Lei Wang, Yuchun Huang, Yaolin Hou, Shenman Zhang, and Jie Shan. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [67] Yongxin Wang, Kris Kitani, and Xinshuo Weng. Joint object detection and multi-object tracking with graph neural networks. In Proceedings of (ICRA) International Conference on Robotics and Automation, May 2021.
* [68] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. In Robotics: Science and Systems (RSS), 2018.
* [69] Yiding Yang, Zhou Ren, Haoxiang Li, Chunluan Zhou, Xinchao Wang, and Gang Hua. Learning dynamics via graph neural networks for human pose estimation and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8074–8084, June 2021.
* [70] Sergey Zakharov, Ivan Shugurov, and Slobodan Ilic. Dpod: 6d pose object detector and refiner. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
* [71] Chi Zhang, Yujun Cai, Guosheng Lin, and Chunhua Shen. Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [72] Lu Zhang, Shuigeng Zhou, Jihong Guan, and Ji Zhang. Accurate few-shot object detection with support-query mutual guidance and hybrid loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14424–14432, June 2021.
* [73] Penghao Zhang, Jiayue Li, Yining Wang, and Judong Pan. Domain adaptation for medical image segmentation: A meta-learning method. Journal of Imaging, 7(2), 2021.
Appendix A Evaluation Results
-----------------------------
### A.1 LineMOD Dataset
The LineMOD dataset[[29](https://arxiv.org/html/2206.07162#bib.bib29)] is split into 10 training objects and 3 unseen test objects, where iron, lamp and phone are the novel test objects. [Fig.2](https://arxiv.org/html/2206.07162#A3.F2 "Figure 2 ‣ Appendix C Network Architecture ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") shows the qualitative comparison between FFB6D[[26](https://arxiv.org/html/2206.07162#bib.bib26)] and the proposed model on training objects. It can be observed that our model predicts keypoints more accurately. From [Fig.3](https://arxiv.org/html/2206.07162#A3.F3 "Figure 3 ‣ Appendix C Network Architecture ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation"), we can see that our model achieves better performance on novel objects. Note that we train a single model for all objects, rather than one model per object.
### A.2 Toy-MCMS Dataset
[Tab.1](https://arxiv.org/html/2206.07162#A1.T1 "Table 1 ‣ A.2 Toy-MCMS Dataset ‣ Appendix A Evaluation Results ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") provides the quantitative results of inter-category 6D pose estimation on the car category. We use 50 images per object for training and vary the number of training objects. The experimental results show that 80 car objects achieve a similar ADD accuracy to 1100 objects, while the training time is reduced considerably. Overall, this represents a good compromise between prediction performance and training overhead. [Fig.4](https://arxiv.org/html/2206.07162#A3.F4 "Figure 4 ‣ Appendix C Network Architecture ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") shows the qualitative results on novel test objects using the model trained with 80 objects. Note that even within the car category, the colors and shapes of novel objects still vary considerably.
Table 1: Single category - car evaluation on Toy-MCMS dataset
[Tab.2](https://arxiv.org/html/2206.07162#A1.T2 "Table 2 ‣ A.2 Toy-MCMS Dataset ‣ Appendix A Evaluation Results ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") shows the quantitative results of the multi-category evaluation on the Toy-MCMS dataset. The vanilla meta-learner (Vanilla-ML) with an MLP decoder is compared against the proposed geometry-aware meta-learner (GAML). GAML outperforms Vanilla-ML by a large margin.
Table 2: Multi-category evaluation on Toy-MCMS dataset
### A.3 PBR-MCMS Dataset
We compare FFB6D, Vanilla-ML and GAML on intra- and cross-category levels. The full statistical summary can be found in [Tab.6](https://arxiv.org/html/2206.07162#A3.T6 "Table 6 ‣ Appendix C Network Architecture ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation"). In general, the ADD metric is used for non-symmetric objects and ADD-S[[29](https://arxiv.org/html/2206.07162#bib.bib29)] for symmetric objects. Since the matching between points is ambiguous for some poses, ADD-S computes the mean distance based on the minimum point distance:
$$\mathrm{ADD\text{-}S}=\frac{1}{m}\sum_{x_{1}\in\mathcal{O}}\min_{x_{2}\in\mathcal{O}}\left\lVert(Rx_{1}+t)-(R^{*}x_{2}+t^{*})\right\rVert. \tag{1}$$
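The difference between ADD and ADD-S is easy to see in code: ADD compares corresponding model points under the two poses, while ADD-S takes the minimum over all ground-truth points as in Eq. (1). A small numpy illustration; the toy square "model" and the poses are made up for demonstration:

```python
import numpy as np

def transform(points, R, t):
    """Apply a rigid transform to (N, 3) points."""
    return points @ R.T + t

def add_metric(model, R, t, R_gt, t_gt):
    """ADD: mean distance between *corresponding* transformed model points."""
    return np.linalg.norm(transform(model, R, t) - transform(model, R_gt, t_gt),
                          axis=1).mean()

def adds_metric(model, R, t, R_gt, t_gt):
    """ADD-S (Eq. 1): mean distance to the *closest* ground-truth point."""
    pred = transform(model, R, t)        # x1 under the estimated pose
    gt = transform(model, R_gt, t_gt)    # x2 under the ground-truth pose
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# A square in the xy-plane is symmetric under a 90-degree rotation about z:
model = np.array([[1., 1., 0.], [1., -1., 0.], [-1., -1., 0.], [-1., 1., 0.]])
R_gt, t_gt = np.eye(3), np.zeros(3)
Rz90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(add_metric(model, Rz90, t_gt, R_gt, t_gt))   # prints 2.0 (correspondences mismatch)
print(adds_metric(model, Rz90, t_gt, R_gt, t_gt))  # prints 0.0 (symmetry is tolerated)
```

The example makes the Symmetry Ambiguity discussed in the limitations concrete: a pose that is "wrong" by the object's symmetry is heavily penalized by ADD but not by ADD-S.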
### A.4 Occlusion-MCMS Dataset
A comparison between Vanilla-ML and GAML on Occlusion-MCMS is given in [Tab.3](https://arxiv.org/html/2206.07162#A1.T3 "Table 3 ‣ A.4 Occlusion-MCMS Dataset ‣ Appendix A Evaluation Results ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation").
Table 3: Multi-category evaluation on Occlusion-MCMS dataset
Appendix B Ablation Study
-------------------------
### B.1 Effect of K Neighbors in GNN
We measure the ADD-0.1d accuracy of multi-category and single-category training with k=3 and k=8 in the GNN decoder. [Tab.4](https://arxiv.org/html/2206.07162#A2.T4 "Table 4 ‣ B.1 Effect of K Neighbors in GNN ‣ Appendix B Ablation Study ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") presents the quantitative results.
Table 4: Effect of K neighbors in the GNN decoder on PBR-MCMS. The first block provides statistics for multi-category training, where one model is trained on multiple categories and tested on new objects from both training and new categories. For single-category training, each model is trained and tested per category.
### B.2 Effect of Aggregation Module in CNP
[Fig.1](https://arxiv.org/html/2206.07162#A2.F1 "Figure 1 ‣ B.2 Effect of Aggregation Module in CNP ‣ Appendix B Ablation Study ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation") shows that CNP and ANP achieve similar final training performance, even though ANP converges faster at the beginning.
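The two aggregation schemes being compared can be summarized compactly: CNP compresses all context embeddings into a single mean latent, while ANP produces a target-specific latent by attending over the context. A minimal numpy sketch; the embedding sizes and the scaled dot-product attention form are illustrative assumptions, not the exact modules used here:

```python
import numpy as np

def cnp_aggregate(context_emb):
    """CNP: permutation-invariant mean over all context embeddings."""
    return context_emb.mean(axis=0)                  # (F,) one global latent

def anp_aggregate(context_emb, context_key, target_query):
    """ANP: each target attends over the context (scaled dot-product)."""
    scores = target_query @ context_key.T / np.sqrt(context_key.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # softmax over context points
    return w @ context_emb                           # (T, F) per-target latents

rng = np.random.default_rng(0)
ctx = rng.normal(size=(5, 8))                        # 5 context embeddings
keys = rng.normal(size=(5, 4))                       # keys for the context points
queries = rng.normal(size=(3, 4))                    # 3 target queries
print(cnp_aggregate(ctx).shape)                      # prints (8,)
print(anp_aggregate(ctx, keys, queries).shape)       # prints (3, 8)
```

The extra per-target weighting is what typically lets ANP fit faster early in training, at the cost of more computation per target.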
Figure 1: Comparison of training loss between CNP and ANP
Appendix C Network Architecture
-------------------------------
The detailed model architecture is shown in [Tab.5](https://arxiv.org/html/2206.07162#A3.T5 "Table 5 ‣ Appendix C Network Architecture ‣ GAML: Geometry-Aware Meta-Learner for Cross-Category 6D Pose Estimation"). We use ReLU as the activation function after each FC layer, except for the output layer of the segmentation decoder and the output layer of the global GNN decoder for keypoint offset prediction.
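This activation convention (ReLU after every FC layer except the output layer) can be expressed as a small helper; output heads skip the activation so that segmentation logits and keypoint offsets remain unbounded. A plain-numpy sketch where the layer sizes are placeholders, not the actual GAML dimensions:

```python
import numpy as np

def make_mlp(sizes, rng):
    """Weight/bias pairs for an FC stack, e.g. sizes=[128, 64, 3] -> two layers."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """ReLU after every FC layer except the last (output) layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:          # no activation on the output layer
            x = np.maximum(0.0, x)
    return x

rng = np.random.default_rng(0)
head = make_mlp([128, 64, 3], rng)       # e.g. a hypothetical 3-D offset head
out = forward(head, rng.normal(size=(10, 128)))
# Hidden activations are non-negative; the final layer is left linear, which
# matters for offsets that must be able to point in any direction.
```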
Table 5: GAML network architecture.
Table 6: Multi-category evaluation on PBR-MCMS dataset
Figure 2: Qualitative comparison on trained LineMOD objects. Triangles and circles are the projections of ground-truth and predicted keypoints respectively. It can be observed that keypoint predictions of our method are more accurate.
Figure 3: Qualitative comparison on new LineMOD objects. Compared with FFB6D, the pose estimation on new objects of our GAML model is more accurate.
Figure 4: Qualitative results on Toy-MCMS. Our model can handle large intra-category variations. The car category is illustrated as an example.
Figure 5: Qualitative comparison between GNN and MLP decoder on Occlusion-MCMS. Triangles and circles are the projected ground-truth and predicted keypoints respectively.