Title: Articulated Object Manipulation with Coarse-to-fine Affordance for Mitigating the Effect of Point Cloud Noise

URL Source: https://arxiv.org/html/2402.18699

Published Time: Fri, 08 Mar 2024 01:28:01 GMT

Markdown Content:
Suhan Ling*, Yian Wang*, Shiguang Wu, Yuzheng Zhuang, Tianyi Xu, Yu Li, Chang Liu, Hao Dong. Suhan Ling, Yian Wang, Tianyi Xu, Yu Li, Chang Liu, and Hao Dong are with Hyperplane Lab, School of CS, Peking University and the National Key Laboratory for Multimedia Information Processing. Shiguang Wu and Yuzheng Zhuang are with Huawei. * indicates equal contribution. Corresponding author: hao.dong@pku.edu.cn

###### Abstract

3D articulated objects are inherently challenging to manipulate due to their varied geometries and intricate functionalities. Point-level affordance, which predicts a per-point actionable score and thus proposes the best point to interact with, has demonstrated excellent performance and generalization capabilities in articulated object manipulation. However, a significant challenge remains: while previous works use perfect point clouds generated in simulation, their models cannot be directly applied to the noisy point clouds scanned in the real world. To tackle this challenge, we leverage a property of real-world scanned point clouds: the point cloud becomes less noisy as the camera moves closer to the object. We therefore propose a novel coarse-to-fine affordance learning pipeline that mitigates the effect of point cloud noise in two stages. In the first stage, we learn affordance on the noisy far-view point cloud, which covers the whole object, to propose the approximate place to manipulate. We then move the camera in front of that place, scan a less noisy point cloud containing precise local geometry, and learn affordance on this point cloud to propose fine-grained final actions. The proposed method is thoroughly evaluated both on large-scale simulated noisy point clouds mimicking real-world scans and in real-world scenarios, outperforming existing methods and demonstrating its effectiveness in tackling the noisy real-world point cloud problem.

I INTRODUCTION
--------------

It is essential for next-generation robots to effectively assist humans and interact precisely with common 3D articulated objects such as cabinets and drawers. Unlike humans, robots lack an innate ability to understand part semantics, which makes interacting with highly articulated objects challenging.

Recent research has ventured into fine-grained manipulation affordance analysis based on 3D geometric inputs[[1](https://arxiv.org/html/2402.18699v2#bib.bib1), [2](https://arxiv.org/html/2402.18699v2#bib.bib2), [3](https://arxiv.org/html/2402.18699v2#bib.bib3), [4](https://arxiv.org/html/2402.18699v2#bib.bib4)]. Notably, point-level affordance, which focuses on the geometric information of object local parts and represents per-point actionable information for manipulating diverse kinds of objects, has demonstrated excellent performance and generalization in various downstream tasks, including articulated object manipulation[[5](https://arxiv.org/html/2402.18699v2#bib.bib5), [6](https://arxiv.org/html/2402.18699v2#bib.bib6), [7](https://arxiv.org/html/2402.18699v2#bib.bib7)], bimanual collaboration[[8](https://arxiv.org/html/2402.18699v2#bib.bib8)], environment-aware manipulation[[9](https://arxiv.org/html/2402.18699v2#bib.bib9)], and even deformable object manipulation[[10](https://arxiv.org/html/2402.18699v2#bib.bib10)].

![Image 1: Refer to caption](https://arxiv.org/html/2402.18699v2/x1.png)

Figure 1: Our proposed coarse-to-fine affordance learning framework for articulated object manipulation with real-world noisy observations.

However, when it comes to practical manipulation in the real world, the above affordance learning approaches encounter a challenge known as the sim-to-real gap. Specifically, policies may perform admirably in simulators that generate large-scale perfect point clouds, but when deployed directly on noisy point clouds scanned in the real world, policies trained in simulation easily fail. The performance degradation from simulation to the real world has two main causes. First, a model trained only on perfect point clouds in simulation is not robust to the out-of-distribution noisy point clouds of the real world. Second, geometries important for manipulation, such as handles and edges, may suffer heavy distortion or even disappear from the noisy point cloud, which therefore cannot provide valid information for fine-grained manipulation.

To mitigate the effect of point cloud noise on manipulation, this paper leverages the property that the extent of noise increases with the distance between the camera and the target object, and introduces a coarse-to-fine affordance learning framework. As shown in Fig. [1](https://arxiv.org/html/2402.18699v2#S1.F1 "Figure 1 ‣ I INTRODUCTION ‣ Articulated Object Manipulation with Coarse-to-fine Affordance for Mitigating the Effect of Point Cloud Noise"), in the coarse stage, a coarse noisy point cloud is taken from the far view, and the affordance prediction indicates the approximate area to manipulate. Even though significant local geometries such as handles are distorted or even missing, the coarse affordance can estimate rough manipulation positions when taking the point cloud covering the whole shape as input. In the fine stage, we move the camera mounted on the gripper to the front of the manipulation area estimated by the coarse affordance and take a point cloud of the local area. This fine point cloud suffers much less noise and preserves the geometries of the target object needed for manipulation. However, proposing actions from the fine point cloud alone has a limitation: it disregards global geometric context. For example, when opening a door, the fine point cloud indicates where to manipulate but contains no information about the action direction, since the door axis is not included in the near-view point cloud. To address this problem, we learn affordance and propose actions in a coarse-to-fine framework that integrates the features of the coarse point cloud into the fine point cloud.

In summary, our main contributions are the following.

*   We propose a novel affordance framework that utilizes an eye-on-hand camera to obtain closer views tailored to the requirements of manipulation tasks, effectively addressing the challenges posed by noise in point cloud data.
*   We adapt PointNet++[[11](https://arxiv.org/html/2402.18699v2#bib.bib11)] to concurrently encode point cloud data captured from both the far and near views of the eye-on-hand camera.
*   Experiments on noisy point clouds demonstrate that our method retains detailed geometric information from the closer point cloud while preserving global shape information from the farther one.

II RELATED WORK
---------------

### II-A 3D Articulated Object Manipulation

Diverging from conventional 3D objects, articulated objects present intricacies in their kinematic properties attributable to varying joint types and joint limits. Consequently, many scholarly endeavors have arisen to understand the structure of 3D articulated objects [[12](https://arxiv.org/html/2402.18699v2#bib.bib12), [13](https://arxiv.org/html/2402.18699v2#bib.bib13), [14](https://arxiv.org/html/2402.18699v2#bib.bib14), [15](https://arxiv.org/html/2402.18699v2#bib.bib15)] and how to manipulate them [[5](https://arxiv.org/html/2402.18699v2#bib.bib5), [6](https://arxiv.org/html/2402.18699v2#bib.bib6), [7](https://arxiv.org/html/2402.18699v2#bib.bib7), [16](https://arxiv.org/html/2402.18699v2#bib.bib16)]. Compared with grasping problems [[17](https://arxiv.org/html/2402.18699v2#bib.bib17), [18](https://arxiv.org/html/2402.18699v2#bib.bib18), [19](https://arxiv.org/html/2402.18699v2#bib.bib19), [20](https://arxiv.org/html/2402.18699v2#bib.bib20), [21](https://arxiv.org/html/2402.18699v2#bib.bib21), [22](https://arxiv.org/html/2402.18699v2#bib.bib22), [23](https://arxiv.org/html/2402.18699v2#bib.bib23), [24](https://arxiv.org/html/2402.18699v2#bib.bib24)], manipulating 3D articulated objects requires not only a detailed grasping pose but also knowing how to execute interactions after grasping the correct point (_e.g._, when pulling open a door, the interaction trajectory should be an arc). Recent studies[[6](https://arxiv.org/html/2402.18699v2#bib.bib6), [16](https://arxiv.org/html/2402.18699v2#bib.bib16), [25](https://arxiv.org/html/2402.18699v2#bib.bib25), [26](https://arxiv.org/html/2402.18699v2#bib.bib26), [27](https://arxiv.org/html/2402.18699v2#bib.bib27)] have explored executing sequential interactions to accomplish tasks, leveraging either known joint information or alternative approaches. While these investigations have demonstrated remarkable achievements in manipulating 3D articulated objects within simulated environments, their direct applicability to real-world scenarios remains challenging due to the significant sim-to-real gap.

### II-B Visual Affordance for 3D Shapes

Affordance, initially proposed by Gibson[[28](https://arxiv.org/html/2402.18699v2#bib.bib28)], serves as a representation conveying the interactive properties inherent in a scene[[29](https://arxiv.org/html/2402.18699v2#bib.bib29), [30](https://arxiv.org/html/2402.18699v2#bib.bib30)] or object[[31](https://arxiv.org/html/2402.18699v2#bib.bib31), [32](https://arxiv.org/html/2402.18699v2#bib.bib32)] and has been successfully utilized in various studies[[1](https://arxiv.org/html/2402.18699v2#bib.bib1), [2](https://arxiv.org/html/2402.18699v2#bib.bib2), [33](https://arxiv.org/html/2402.18699v2#bib.bib33), [3](https://arxiv.org/html/2402.18699v2#bib.bib3)]. In manipulation tasks, incorporating visual affordance as an intermediate outcome can enhance the interpretability of manipulation policies[[7](https://arxiv.org/html/2402.18699v2#bib.bib7), [34](https://arxiv.org/html/2402.18699v2#bib.bib34), [35](https://arxiv.org/html/2402.18699v2#bib.bib35), [9](https://arxiv.org/html/2402.18699v2#bib.bib9)]. Notably, the visual affordance heatmap provides a readily accessible means of visualizing the resulting affordance.

III Problem Formulation
-----------------------

Through observing the object from a relatively far view, our framework obtains a 3D partial point cloud $O^{far}\in\mathbb{R}^{N\times 3}$ as input. Our framework then predicts an affordance score $a_p\in[0,1]$ for each point $p\in\mathbb{R}^{3}$ in $O^{far}$, indicating the value of taking a closer view near this point, and chooses the point with the highest score, $p_{far}=\arg\max_{p} a_p$, to take a closer look at. After moving the camera closer to $p_{far}$, we scan another partial point cloud $O^{near}\in\mathbb{R}^{N\times 3}$ from this view.

Taking these two frames of point cloud as input, for each point $p\in O^{near}$ our framework proposes a set of action proposals $ACT_p=\{act_1, act_2, \ldots\}$ (_i.e._, different action orientations at the point $p$), namely $act_i=(p, R_i)$, where $p$ is the 3D point position and $R_i$ is the 6D gripper orientation. Here we generally follow the action definition of Where2act[[5](https://arxiv.org/html/2402.18699v2#bib.bib5)] by defining a task.

Finally, taking as input the two frames of point cloud $O^{far}$ and $O^{near}$ and the proposed action sets, our framework predicts the probability of each action successfully moving the 3D articulated object and selects the action with the highest score to execute. Specifically, we denote this probability as $c_{act}\in[0,1]$.

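Concretely, this two-stage selection can be sketched as follows. This is a hedged toy sketch, not the paper's implementation: `coarse_affordance` and `score_action` are random stand-ins for the trained Coarse and Fine modules, and the near scan is simulated as points around $p_{far}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_affordance(o_far):
    """Stand-in for the Coarse module: per-point score a_p in [0, 1]."""
    return rng.random(len(o_far))

def score_action(o_near, act):
    """Stand-in for the Fine module: success probability c_act in [0, 1]."""
    return rng.random()

o_far = rng.random((1024, 3))            # O^far: far-view partial point cloud
a_p = coarse_affordance(o_far)           # coarse per-point affordance scores
p_far = o_far[np.argmax(a_p)]            # p_far = argmax_p a_p

# After moving the camera toward p_far, a near-view scan O^near is taken
# (simulated here as points scattered around p_far).
o_near = p_far + 0.05 * rng.standard_normal((1024, 3))

# Propose candidate actions (p, R_i) on near-view points and pick the
# highest-scoring one to execute.
candidates = [(p, rng.standard_normal(3)) for p in o_near[:16]]
best = max(candidates, key=lambda act: score_action(o_near, act))
```

The key structural point is that the argmax over $a_p$ only chooses where to look next; the final action is selected from proposals scored on the near-view cloud.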
IV Method
---------

![Image 2: Refer to caption](https://arxiv.org/html/2402.18699v2/x2.png)

Figure 2: Framework overview. Given the noisy far-view point cloud as observation, our framework extracts per-point features using a multi-scale PointNet++ and predicts a per-point coarse affordance map. We move the camera to the front of the point with the highest coarse affordance score and take a less noisy point cloud. The framework uses another PointNet++ to extract per-point features of the fine point cloud, integrating the features of the far point cloud. The predicted affordance proposes the fine-grained actions.

As shown in Fig. [2](https://arxiv.org/html/2402.18699v2#S4.F2 "Figure 2 ‣ IV Method ‣ Articulated Object Manipulation with Coarse-to-fine Affordance for Mitigating the Effect of Point Cloud Noise"), our framework consists of three functional modules: 1) the Coarse module takes only the far observation $O^{far}$ as input and predicts a coarse affordance score $a_p$ for each point $p\in O^{far}$; 2) the Fine module takes the near observation $O^{near}$ and the action set $ACT_p$ for each $p\in O^{near}$ as input and predicts the feasibility $c_{act}$ of each action; and 3) the Actor takes both $O^{far}$ and $O^{near}$ as input and outputs an action set for each point in $O^{near}$ for manipulating the object.

To concurrently utilize both views of point cloud data, we introduce a special design in the PointNet++ network[[11](https://arxiv.org/html/2402.18699v2#bib.bib11)] that integrates our Coarse and Fine modules, so the Fine module can perceive global (_i.e._, far-view) and local (_i.e._, near-view) information as a whole. Below, we describe each component of our pipeline in detail.

### IV-A Far-view Coarse Affordance

Although the far view tends to receive much noise, it also captures crucial global information for selecting a nearer viewing position from which to inspect the object closely. A stationary far camera therefore captures a comprehensive view and geometric overview of the object. This information aids in assessing point affordance (_i.e._, $a_p$) and selecting the focal point for zooming in, our point of interest.

Given the far-view point cloud of the object, we utilize PointNet++ to regress $a_p$ for each point in $O^{far}$. The challenge lies in determining the ground truth $a_p^{gt}$ for each point: directly defining the ground-truth score as the success rate after taking this view is inadequate, owing to the considerable object visibility in the closer view.

Assuming flawless Actor and Fine modules, interaction success hinges on the presence of an actionable point in the closer view, rendering local geometry inconsequential. Consequently, through trial and error in the simulated environment, we set $a_p^{gt}=1$ if an interaction succeeds in the middle of the closer view, and $a_p^{gt}=0$ otherwise. This approach enhances the influence of local geometry on the predicted scores in the far view.

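This labeling rule can be written out as a small sketch. The function name and the distance threshold are our own illustrative choices, not the paper's code; supervision only applies to points near the middle of the near view (i.e., near $p_{far}$), as also described in the implementation details.

```python
import numpy as np

def coarse_gt(p, p_far, interaction_success, dist_threshold=0.05):
    """Illustrative coarse ground-truth rule: a_p^gt equals the interaction
    outcome (1 for success, 0 for failure) only when p lies within
    dist_threshold of p_far, the 'middle' of the near view; other points
    receive no supervision signal (None)."""
    if np.linalg.norm(np.asarray(p) - np.asarray(p_far)) > dist_threshold:
        return None
    return 1 if interaction_success else 0

# A point close to p_far with a successful interaction gets label 1.
label = coarse_gt([0.0, 0.0, 0.0], [0.01, 0.0, 0.0], interaction_success=True)
```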
### IV-B Near-view Fine Affordance

Although the far view incurs a considerable amount of noise and is inadequate for accurate manipulation, it provides us with a point of interest (_i.e._, the zoom-in point). To make the best of the global information provided by the far view while minimizing camera noise, the eye-on-hand camera is subsequently positioned at the point of interest, providing a near view of the target object. The near-view point cloud (_i.e._, $O^{near}$) acquired by this camera is part of the input of our Fine module, which functions as a critic network assessing the feasibility of actions proposed by the Actor module. Given $O^{near}$ as input, it employs PointNet++ to encode geometric attributes for each point $p\in O^{near}$, producing the point's geometry feature $f_p\in\mathbb{R}^{128}$. Subsequently, an MLP network integrates the geometry feature $f_p$ with the action $act_p$, predicting the success probability $c_{act}$.

However, this Fine module possesses a limitation: it disregards global geometric context. Consequently, it lacks 1) comprehensive awareness of boundary geometry in the closer view, and 2) consideration of the vital joint information essential for manipulating articulated objects. To address this, we introduce an integration between the Coarse and Fine modules, enabling the transfer of global geometric insights from $O^{far}$ to the Fine module.

### IV-C Coarse-and-fine Integration

To fully utilize both far- and near-view information, the two views should intuitively be aligned and unified under the same world frame for further manipulation. Therefore, to establish a connection between the Coarse and Fine modules, we implement a unique design leveraging PointNet++. This design enables simultaneous encoding of $O^{near}$ and $O^{far}$, facilitating the transfer of global information from $O^{far}$ to $O^{near}$ while preserving local, detailed geometric information.

In practical terms, as depicted in Fig. [2](https://arxiv.org/html/2402.18699v2#S4.F2 "Figure 2 ‣ IV Method ‣ Articulated Object Manipulation with Coarse-to-fine Affordance for Mitigating the Effect of Point Cloud Noise"), we employ two separate PointNet++ encoders for $O^{far}$ and $O^{near}$. The later layers of PointNet++ contain global insights, whereas the earlier layers retain intricate local details. To bridge the gap, we employ PointNet++'s Feature Propagation (FP) modules. Specifically, we interpolate the later layer's output onto the earlier layer, concatenate this interpolated feature with the feature from the earlier layer, and then process the concatenated feature through a unit pointnet to derive the final feature representation.

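A simplified sketch of this interpolate-then-concatenate step, in the spirit of PointNet++ Feature Propagation: coarse features are carried onto the fine points by inverse-distance weighting over the $k$ nearest coarse points, then concatenated with the fine features. Shapes and feature widths here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def propagate(far_xyz, far_feat, near_xyz, k=3, eps=1e-8):
    """Inverse-distance interpolation of far-view features onto near points,
    as in PointNet++-style Feature Propagation (simplified)."""
    # Pairwise distances between near points and far points: (M, N).
    d = np.linalg.norm(near_xyz[:, None, :] - far_xyz[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                 # k nearest far points
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)
    w /= w.sum(axis=1, keepdims=True)                  # normalized weights
    return (far_feat[idx] * w[:, :, None]).sum(axis=1)

rng = np.random.default_rng(1)
far_xyz, far_feat = rng.random((256, 3)), rng.random((256, 64))
near_xyz, near_feat = rng.random((512, 3)), rng.random((512, 128))

interp = propagate(far_xyz, far_feat, near_xyz)        # (512, 64)
fused = np.concatenate([near_feat, interp], axis=1)    # fed to a unit pointnet
```

The fused per-point feature is what a "unit pointnet" (a shared per-point MLP) would then process into the final representation.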
### IV-D Actor Module

To manipulate the object, our Actor module is responsible for proposing feasible actions (_i.e._, the initial pulling or pushing direction) given the manipulation point. Specifically, we utilize a conditional Variational Autoencoder (cVAE) to propose suitable actions $act_p$ based on the geometric characteristics of a given point $p\in O^{near}$.

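At inference time a cVAE actor of this kind samples a Gaussian latent, conditions on the point feature, and decodes an orientation. The sketch below is a toy stand-in under our own assumptions (untrained random weights, a two-layer decoder, 128-d point feature, 16-d latent, 6-d orientation output); it illustrates the sampling pattern, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy decoder weights (untrained): input = point feature (128) + latent (16).
W1, b1 = rng.standard_normal((128 + 16, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.standard_normal((64, 6)) * 0.1, np.zeros(6)

def propose_actions(f_p, n_proposals=8, z_dim=16):
    """Sample z ~ N(0, I), condition on f_p, decode orientation proposals."""
    z = rng.standard_normal((n_proposals, z_dim))
    x = np.concatenate([np.tile(f_p, (n_proposals, 1)), z], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    return h @ W2 + b2                      # one 6D orientation per sample

acts = propose_actions(rng.standard_normal(128))   # (8, 6) candidate actions
```

Each row is one candidate $act_p$ for the same point; the Fine module then scores these candidates.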
### IV-E Implementation Details

Our data collection aims to capture effective manipulation dynamics, vital for training our framework. We focus on creating a comprehensive dataset that robustly represents object-action interactions for point cloud affordance learning.

Our systematic approach centers on interaction instances denoted as $D=\{O^{far}, p_{far}, O^{near}, p, act_p, gt\}$. Here, $p_{far}$ signifies the examined point from $O^{far}$, while $p$ is the manipulation point within $O^{near}$; $act_p$ represents the action, and $gt\in\{0,1\}$ indicates success based on exceeding a threshold $\tau$.

Data collection efficiency is enhanced with the Where2Act[[36](https://arxiv.org/html/2402.18699v2#bib.bib36)] pulling networks. Randomly sampling actionable points from point clouds in pulling tasks is inefficient due to high failure rates. By using the pre-trained Where2Act network, we streamline the selection of $p_{far}$, $p$, and $act_p$, drastically reducing collection time. This allows for a larger dataset, significantly boosting model performance.

During training, we first jointly train the Coarse and Fine modules using positive and negative data. Subsequently, we train the Actor module with positive data and the PointNet++ network from the Coarse and Fine modules.

Loss for the Coarse module. As outlined in Section IV-A, when $p$ is centrally located in the near view, we use $a_p^{gt}=gt$ for direct supervision of the Coarse module training. The near view is a closer look at $p_{far}$, and its "middle" pertains to the area surrounding $p_{far}$. Therefore, we apply the $a_p^{gt}=gt$ supervision when the distance between $p$ and $p_{far}$ is below a set threshold.

A standard binary cross-entropy loss is utilized here to measure the discrepancy between the predicted score $a_p$ and its ground truth $a_p^{gt}$. The loss is defined as:

$$\mathcal{L}_{\textit{Coarse}}=-\left(a_p^{gt}\log(a_p)+(1-a_p^{gt})\log(1-a_p)\right)$$

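This is the standard binary cross entropy; a direct numpy transcription (with clipping added for numerical stability, which is our addition, not part of the equation):

```python
import numpy as np

def bce(pred, gt, eps=1e-7):
    """Binary cross entropy: -(gt*log(pred) + (1-gt)*log(1-pred)).
    Clipping keeps log() finite for predictions at exactly 0 or 1."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(gt * np.log(pred) + (1.0 - gt) * np.log(1.0 - pred))

loss = bce(0.5, 1.0)   # log(2), about 0.693
```

Applied per point with `pred = a_p` and `gt = a_p^gt`, this is exactly the Coarse loss above.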
Loss for the Fine module. Similar to the loss for the Coarse module, given a piece of data $D$ and the Fine module's prediction $c_{act}$, we directly define its ground truth as $c_{act}^{gt}=gt$, then use a standard binary cross-entropy loss to train the network:

$$\mathcal{L}_{\textit{Fine}}=-\left(c_{act}^{gt}\log(c_{act})+(1-c_{act}^{gt})\log(1-c_{act})\right)$$

Loss for the Actor module. Following Where2act[[5](https://arxiv.org/html/2402.18699v2#bib.bib5)], which uses an action scoring module $D_s$ in which $s_{R|p}=D_s(f_p,R)>0.5$ indicates a positive action proposal $R$, for a batch of $B$ interactions with their encoded point features $\{(f_{p_i}, act_{p_i}, r_i)\}$, where $r_i\in\{0,1\}$ denotes the ground-truth interaction outcome, we let:

112
+ ℒ 𝐴𝑐𝑡𝑜𝑟=−1 B⁢∑i r i⁢log⁡(D s⁢(f p i,R i))+(1−r i)⁢log⁡(1−D i⁢(f p i,R i))subscript ℒ 𝐴𝑐𝑡𝑜𝑟 1 𝐵 subscript 𝑖 subscript 𝑟 𝑖 subscript 𝐷 𝑠 subscript 𝑓 subscript 𝑝 𝑖 subscript 𝑅 𝑖 1 subscript 𝑟 𝑖 1 subscript 𝐷 𝑖 subscript 𝑓 subscript 𝑝 𝑖 subscript 𝑅 𝑖\mathcal{L}_{\textit{Actor}}=-\frac{1}{B}\sum_{i}r_{i}\log(D_{s}(f_{p_{i}},R_{% i}))+(1-r_{i})\log(1-D_{i}(f_{p_{i}},R_{i}))caligraphic_L start_POSTSUBSCRIPT Actor end_POSTSUBSCRIPT = - divide start_ARG 1 end_ARG start_ARG italic_B end_ARG ∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_r start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT roman_log ( italic_D start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_f start_POSTSUBSCRIPT italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT , italic_R start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) ) + ( 1 - italic_r start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) roman_log ( 1 - italic_D start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_f start_POSTSUBSCRIPT italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT , italic_R start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) )
113
+
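Both objectives are standard binary cross-entropies. A minimal NumPy sketch, with illustrative scores and labels rather than values from the paper:

```python
import numpy as np

def bce(pred, target, eps=1e-8):
    """Binary cross-entropy; clipping avoids log(0)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

# L_Fine: one predicted actionability score c_act vs. its ground truth
loss_fine = bce(0.9, 1.0)

# L_Actor: mean BCE over a batch of B scored action proposals
scores = np.array([0.8, 0.3, 0.6])   # D_s(f_{p_i}, R_i)
labels = np.array([1.0, 0.0, 1.0])   # ground-truth outcomes r_i
loss_actor = bce(scores, labels).mean()
```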
V Experiments
-------------

![Image 3: Refer to caption](https://arxiv.org/html/2402.18699v2/x3.png)

Figure 3: Visualization of far-view coarse affordance and near-view fine affordance on noisy point clouds. The task for the microwave is "push close", while the task for the others is "pull open". The first three shapes come from simulation; the last two come from real-world scans.

This section covers the experimental setup, quantitative evaluations, and visual results, providing a comprehensive analysis of our proposed approach.
### V-A Experimental Setup

Our experiments use the SAPIEN physics simulator [[37](https://arxiv.org/html/2402.18699v2#bib.bib37)] and the PartNet-Mobility dataset [[38](https://arxiv.org/html/2402.18699v2#bib.bib38)], equipped with a ray-tracing depth camera [[39](https://arxiv.org/html/2402.18699v2#bib.bib39)] that generates noisy point clouds following the same principles as real-world sensors. Through visualizations and baseline comparisons, we validate our design's effectiveness both intuitively and quantitatively.

#### V-A1 Environment

Building on Where2act, our setup abstracts away robot-arm complexities. Simulated objects are placed in a zero-centered configuration, enhancing experimental control. Depending on the object category, we use a flying gripper or suction cup for interactions, bypassing reachability challenges and focusing on object-centric policy learning.

Unlike previous methods that use perfect point clouds, we employ a ray-tracing depth camera [[39](https://arxiv.org/html/2402.18699v2#bib.bib39)] for realistic point cloud generation in both training and testing, which mitigates the sim-to-real disparities caused by unrealistic point clouds. The camera's role is two-fold: recording the distant view ($O^{far}$) from a predetermined position, and capturing the nearby view ($O^{near}$) parallel to the ground.
#### V-A2 Tasks

Inspired by VAT-Mart and built on Where2act's task and action design, we employ "PULL OPEN" and "PUSH CLOSE" as tasks. Moreover, we refine Where2act's pulling action for "PULL OPEN": we separate the grasp pose from the subsequent movement, introducing a horizontal backward motion in the world frame as the post-grasp movement. This adjustment streamlines our Fine module by simplifying movement-direction considerations and focusing on accurate assessment of the grasping action.
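The refined "pull open" action above can be sketched as a grasp pose followed by straight-line backward waypoints in the world frame. The axis chosen as "backward", the pull distance, and the function name are assumptions for illustration; the paper only specifies that the post-grasp motion is horizontal and backward:

```python
import numpy as np

def pull_open_action(grasp_point, grasp_rotation, pull_dist=0.1, n_steps=5):
    """A grasp pose plus a horizontal backward motion in the world frame.
    'Backward' along world -x is an assumed convention."""
    backward = np.array([-1.0, 0.0, 0.0])
    p = np.asarray(grasp_point, dtype=float)
    # evenly spaced waypoints along the backward direction after the grasp
    waypoints = [p + backward * pull_dist * (i + 1) / n_steps
                 for i in range(n_steps)]
    return {"grasp": (p, grasp_rotation), "waypoints": waypoints}
```

Decoupling the grasp from the motion means the Fine module only has to score grasp quality, not the full trajectory.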
#### V-A3 Dataset

Our dataset comprises 8 categories, divided into 5 for training (StorageFurniture, Drawer, Microwave, Refrigerator, Kettle) and 3 for novel-category evaluation (WashingMachine, Pot, Safe). Each category is further divided into training and testing object sets. For instance, the 345 objects in the StorageFurniture category are split into 270 training objects and 75 testing objects.

To gather interaction data, we proceed in several steps. First, we select an object from the training set, position it in the scene, and capture $O^{far}$ with the camera at the far, predetermined location. Then, we randomly pick $p_{far}$ within $O^{far}$ and position the camera at the same horizontal location as $p_{far}$ but 0.6 units ahead of it to scan $O^{near}$. The depth information vanishes if the camera moves too close to the object, so a reasonable distance must be kept. After the manipulation point $p$ is designated, we execute action $act_{p}$ and evaluate the result.
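The near-camera placement rule above can be sketched as follows; the world-axis convention and the function name are assumptions for illustration:

```python
import numpy as np

def near_camera_position(p_far, offset=0.6, toward_object=(-1.0, 0.0, 0.0)):
    """Place the near camera `offset` units in front of the proposed point
    p_far, at the same height, looking back at it. `toward_object` is the
    horizontal direction from camera to object (an assumed convention)."""
    p = np.asarray(p_far, dtype=float)
    d = np.asarray(toward_object, dtype=float)
    d[2] = 0.0                       # keep the camera at p_far's height
    d = d / np.linalg.norm(d)
    return p - offset * d            # step back from the point by `offset`
```

The 0.6-unit offset matches the distance used in the paper's data collection; too small an offset would make the depth information vanish.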
For selecting $p_{far}$, $p$, and $act_{p}$, we use a strategy that boosts positive data generation and speeds up collection: we occasionally use pre-trained Where2act networks for decision-making, which increases favorable outcomes.

Our dataset is balanced: positive and negative data each comprise half of the collection. Unlike prior methods, we do not require part masks during testing. This simplification removes the burden of masking in real-world experiments, streamlining the workflow.
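The half-positive/half-negative balance can be obtained by downsampling the larger class; a minimal sketch, where the record format is hypothetical:

```python
import random

def balance(interactions, seed=0):
    """Downsample so positive and negative interaction records each form
    half of the returned dataset (record format is hypothetical)."""
    rng = random.Random(seed)
    pos = [r for r in interactions if r["success"]]
    neg = [r for r in interactions if not r["success"]]
    n = min(len(pos), len(neg))
    return rng.sample(pos, n) + rng.sample(neg, n)
```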
TABLE I: Comparisons of our method with baselines and ablations under the "pull open" task. The first AVG covers unseen objects from the training categories; the second AVG covers the test categories.

| Method | AVG (train) | StorageFurniture | Drawer | Microwave | Refrigerator | Kettle | AVG (test) | Safe | WashingMachine | Pot |
|---|---|---|---|---|---|---|---|---|---|---|
| Where2Act | 0.21 | 0.11 | 0.16 | 0.33 | 0.27 | 0.18 | 0.21 | 0.25 | 0.22 | 0.15 |
| FlowBot3D | 0.29 | 0.18 | 0.22 | 0.42 | 0.39 | 0.22 | 0.27 | 0.31 | 0.26 | 0.24 |
| UMPNet | 0.35 | 0.25 | 0.29 | 0.41 | 0.49 | 0.29 | 0.32 | 0.36 | 0.31 | 0.28 |
| VAT-Mart | 0.38 | 0.28 | 0.23 | 0.56 | 0.52 | 0.29 | 0.35 | 0.46 | 0.32 | 0.27 |
| Ours Random-Coarse | 0.33 | 0.35 | 0.25 | 0.38 | 0.48 | 0.21 | 0.24 | 0.31 | 0.22 | 0.20 |
| Ours Random-Fine | 0.35 | 0.21 | 0.26 | 0.49 | 0.54 | 0.24 | 0.27 | 0.41 | 0.21 | 0.18 |
| Ours Separate | 0.50 | 0.45 | 0.49 | 0.59 | 0.53 | 0.43 | 0.39 | 0.48 | 0.30 | 0.39 |
| Ours Final | 0.61 | 0.58 | 0.63 | 0.68 | 0.64 | 0.51 | 0.50 | 0.56 | 0.48 | 0.45 |
TABLE II: Comparisons of our method with baselines and ablations under the "push close" task. The first AVG covers unseen objects from the training categories; the second AVG covers the test categories.

| Method | AVG (train) | StorageFurniture | Drawer | Microwave | Refrigerator | Kettle | AVG (test) | Safe | WashingMachine | Pot |
|---|---|---|---|---|---|---|---|---|---|---|
| Where2Act | 0.80 | 0.81 | 0.75 | 0.85 | 0.81 | 0.77 | 0.66 | 0.66 | 0.52 | 0.80 |
| FlowBot3D | 0.86 | 0.90 | 0.84 | 0.85 | 0.88 | 0.81 | 0.69 | 0.72 | 0.58 | 0.78 |
| UMPNet | 0.84 | 0.83 | 0.86 | 0.88 | 0.82 | 0.83 | 0.75 | 0.78 | 0.63 | 0.84 |
| VAT-Mart | 0.91 | 0.93 | 0.93 | 0.91 | 0.92 | 0.87 | 0.77 | 0.83 | 0.59 | 0.89 |
| Ours Random-Coarse | 0.43 | 0.50 | 0.48 | 0.39 | 0.47 | 0.33 | 0.40 | 0.40 | 0.27 | 0.53 |
| Ours Random-Fine | 0.79 | 0.86 | 0.83 | 0.75 | 0.80 | 0.71 | 0.53 | 0.76 | 0.21 | 0.61 |
| Ours Separate | 0.92 | 0.91 | 0.94 | 0.96 | 0.91 | 0.90 | 0.71 | 0.74 | 0.57 | 0.83 |
| Ours Final | 0.96 | 0.95 | 0.98 | 0.98 | 0.95 | 0.93 | 0.85 | 0.88 | 0.71 | 0.96 |
### V-B Baselines, Ablations and Evaluation Metrics

We conduct at least 200 interactions on randomly sampled test shapes and report success rates.

For a fair comparison, we constrain the baseline studies to affordance-based methods. Specifically, we compare against four baselines (Where2act [[36](https://arxiv.org/html/2402.18699v2#bib.bib36)], FlowBot3D [[25](https://arxiv.org/html/2402.18699v2#bib.bib25)], UMPNet [[40](https://arxiv.org/html/2402.18699v2#bib.bib40)], VAT-Mart [[6](https://arxiv.org/html/2402.18699v2#bib.bib6)]). We retain their architectures but replace their training data with our realistic point cloud data and tasks.

We also implement three ablations to demonstrate the significance of the coarse affordance, the fine affordance, and the coarse-to-fine integration.

* Ours Random-Coarse: we eliminate the Coarse module and choose $p_{far}$ randomly.
* Ours Random-Fine: we eliminate the Fine module and choose $p_{fine}$ randomly.
* Ours Separate: in the Fine module, instead of integrating via interpolation, we first encode $O^{far}$ into a vector $z$ that is fed as an extra input alongside the point features $f_{p}$.

The first two ablations demonstrate the importance of the Coarse module (_i.e._, the benefit of choosing a zoom-in point with the highest affordance score predicted on the far view) and the Fine module, while the last one exhibits the effectiveness of our coarse-to-fine integration design.
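The role of the two modules in the full pipeline, argmax over coarse affordance to pick the zoom-in point and then argmax over fine affordance on the near scan, can be sketched as follows; all interfaces here are hypothetical stand-ins:

```python
import numpy as np

def coarse_to_fine_select(points_far, coarse_scores, rescan, fine_model):
    """Stage 1: argmax of coarse affordance on the far scan proposes the
    zoom-in point. Stage 2: argmax of fine affordance on the near,
    less-noisy re-scan picks the manipulation point. `rescan` and
    `fine_model` stand in for the camera move and the Fine module."""
    p_far = points_far[int(np.argmax(coarse_scores))]
    points_near = rescan(p_far)          # near, less-noisy scan around p_far
    fine_scores = fine_model(points_near)
    return points_near[int(np.argmax(fine_scores))]
```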
### V-C Results and Analysis

Tab. [I](https://arxiv.org/html/2402.18699v2#S5.T1) and [II](https://arxiv.org/html/2402.18699v2#S5.T2) show the numerical comparisons with baselines and ablations, and Fig. [3](https://arxiv.org/html/2402.18699v2#S5.F3) visualizes the coarse and fine affordance on noisy point clouds generated in simulation or scanned in the real world.

In our comparative analysis, we emphasize the effectiveness and efficiency of our key components: coarse affordance, fine affordance, the zoom-in policy, and the integration of the Coarse and Fine modules. These elements collectively shape the performance of our framework.

As shown in Fig. [3](https://arxiv.org/html/2402.18699v2#S5.F3), the Coarse module captures overall object geometry and proposes reasonable zoom-in points, while the Fine module refines the results using detailed local geometry.

For the zoom-in policy evaluation, we compare against the strongest current baselines, which lack a zoom-in stage. Our model outperforms all of them: the baselines rely heavily on detailed geometries, which the noisy far-view point cloud cannot provide.

The results in Tab. [I](https://arxiv.org/html/2402.18699v2#S5.T1) and [II](https://arxiv.org/html/2402.18699v2#S5.T2) highlight the effectiveness of our coarse and fine affordance respectively. Comparing _Ours-Final_ to the _Ours-Random-Coarse_ and _Ours-Random-Fine_ ablations, selecting points strategically, first by coarse affordance and then by fine affordance, outperforms the corresponding random point selection. This reinforces that our method improves interaction efficiency by prioritizing points with strong coarse affordance and then strong fine affordance.

Comparing _Ours-Final_ with the _Ours-Separate_ ablation provides compelling evidence for the necessity and effectiveness of our integration design. Our approach unifies far and near affordance encoding, while the ablation treats them separately. The results show that performance is enhanced through improved information coherence and integration, showcasing our pipeline's robustness.
TABLE III: Success rate of manipulation in the real world (successes over 10 trials per object).

| Method | StorageFurniture (pull open) | Drawer (pull open) | Kettle (pull open) | Pot (pull open) | Drawer (push close) | Pot (push close) |
|---|---|---|---|---|---|---|
| FlowBot3D | 3/10 | 2/10 | 3/10 | 2/10 | 7/10 | 6/10 |
| VAT-Mart | 4/10 | 4/10 | 2/10 | 3/10 | 7/10 | 8/10 |
| Ours | 6/10 | 7/10 | 6/10 | 5/10 | 9/10 | 10/10 |
### V-D Real-world Experiment

To verify our method's real-world applicability, we conduct experiments using a 7-DOF Franka Emika Panda robot with a RealSense camera mounted on its hand. The full setup is shown in Fig. [1](https://arxiv.org/html/2402.18699v2#S1.F1).

In the real-world experiments, we sequentially capture the far and near point clouds to generate the affordance map, an object-centric representation that provides the manipulation point and interaction direction. To achieve long-horizon manipulation, we employ a heuristic method that sequentially outputs the next action and executes it with Cartesian impedance control, which maps the end-effector action to robot commands.
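The heuristic long-horizon execution loop described above can be sketched as follows; the callables stand in for the affordance-driven policy and the impedance controller, and all names are illustrative assumptions:

```python
import numpy as np

def execute_long_horizon(start_pose, next_action, send_to_controller,
                         max_steps=20):
    """Heuristic closed loop: query the next end-effector displacement,
    integrate it, and hand the target to a Cartesian impedance controller
    until the policy signals completion. All callables are hypothetical."""
    pose = np.asarray(start_pose, dtype=float)
    trace = [pose.copy()]
    for _ in range(max_steps):
        delta, done = next_action(pose)
        pose = pose + delta
        send_to_controller(pose)      # controller tracks the new target
        trace.append(pose.copy())
        if done:
            break
    return trace
```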
We conduct experiments on different categories of articulated objects, with 10 interactions per object. The results in Fig. [3](https://arxiv.org/html/2402.18699v2#S5.F3) and Tab. [III](https://arxiv.org/html/2402.18699v2#S5.T3) demonstrate that our method transfers directly to the real world without any additional techniques.

VI Conclusion
-------------

In conclusion, our study presents a novel coarse-to-fine pipeline that effectively utilizes both far and near point clouds, integrating them to mitigate the noisy-point-cloud problem. Through comprehensive evaluations, we have demonstrated that our method surpasses the baselines and ablations across all 8 categories considered. Furthermore, by leveraging an affordance-based method, our approach generalizes well to novel shapes and categories, indicating its potential for real-world applications. It is worth noting that our method requires only commercially available cameras, making it accessible for widespread adoption.
ACKNOWLEDGMENT
--------------

This project was supported by The National Youth Talent Support Program (8200800081) and the National Natural Science Foundation of China (No. 62136001).

References
----------
* [1] S. Deng, X. Xu, C. Wu, K. Chen, and K. Jia, "3D AffordanceNet: A benchmark for visual object affordance understanding," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 1778–1787.
* [2] K. M. Varadarajan and M. Vincze, "AfNet: The affordance network," in _Asian Conference on Computer Vision_. Springer, 2012, pp. 512–523.
* [3] J. Borja-Diaz, O. Mees, G. Kalweit, L. Hermann, J. Boedecker, and W. Burgard, "Affordance learning from play for sample-efficient policy learning," in _2022 International Conference on Robotics and Automation (ICRA)_. IEEE, 2022, pp. 6372–6378.
* [4] Y. Ju, K. Hu, G. Zhang, G. Zhang, M. Jiang, and H. Xu, "Robo-ABC: Affordance generalization beyond categories via semantic correspondence for robot manipulation," _arXiv preprint arXiv:2401.07487_, 2024.
* [5] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani, "Where2act: From pixels to actions for articulated 3D objects," in _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, October 2021, pp. 6813–6823.
* [6] R. Wu, Y. Zhao, K. Mo, Z. Guo, Y. Wang, T. Wu, Q. Fan, X. Chen, L. Guibas, and H. Dong, "VAT-Mart: Learning visual action trajectory proposals for manipulating 3D articulated objects," in _International Conference on Learning Representations_, 2022. [Online]. Available: [https://openreview.net/forum?id=iEx3PiooLy](https://openreview.net/forum?id=iEx3PiooLy)
* [7] Y. Wang, R. Wu, K. Mo, J. Ke, Q. Fan, L. Guibas, and H. Dong, "AdaAfford: Learning to adapt manipulation affordance for 3D articulated objects via few-shot interactions," _European Conference on Computer Vision_, 2022.
* [8] Y. Zhao, R. Wu, Z. Chen, Y. Zhang, Q. Fan, K. Mo, and H. Dong, "DualAfford: Learning collaborative visual affordance for dual-gripper object manipulation," _International Conference on Learning Representations (ICLR)_, 2023.
* [9] R. Wu, K. Cheng, Y. Zhao, C. Ning, G. Zhan, and H. Dong, "Learning environment-aware affordance for 3D articulated object manipulation under occlusions," in _Thirty-seventh Conference on Neural Information Processing Systems_, 2023. [Online]. Available: [https://openreview.net/forum?id=Re2NHYoZ5l](https://openreview.net/forum?id=Re2NHYoZ5l)
* [10] R. Wu, C. Ning, and H. Dong, "Learning foresightful dense visual affordance for deformable object manipulation," in _IEEE International Conference on Computer Vision (ICCV)_, 2023.
* [11] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," _arXiv preprint arXiv:1706.02413_, 2017.
* [12] U. M. Nunes and Y. Demiris, "Online unsupervised learning of the 3D kinematic structure of arbitrary rigid bodies," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 3809–3817.
* [13] X. Li, H. Wang, L. Yi, L. Guibas, A. L. Abbott, and S. Song, "Category-level articulated object pose estimation," _arXiv preprint arXiv:1912.11913_, 2019.
* [14] J. Xu, S. Song, and M. Ciocarlie, "TANDEM: Learning joint exploration and decision making with tactile sensors," _IEEE Robotics and Automation Letters_, 2022.
* [15] Y. Du, R. Wu, Y. Shen, and H. Dong, "Learning part motion of articulated objects using spatially continuous neural implicit representations," in _British Machine Vision Conference (BMVC)_, November 2023.
* [16] Z. Xu, H. Zhanpeng, and S. Song, "UMPNet: Universal manipulation policy network for articulated objects," _IEEE Robotics and Automation Letters_, 2022.
* [17] S. Song, A. Zeng, J. Lee, and T. Funkhouser, "Grasping in the wild: Learning 6DoF closed-loop grasping from low-cost demonstrations," _Robotics and Automation Letters_, 2020.
* [18] I. Akinola, J. Xu, S. Song, and P. K. Allen, "Dynamic grasping with reachability and motion awareness," in _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2021, pp. 9422–9429.
* [19] H.-S. Fang, C. Wang, M. Gou, and C. Lu, "GraspNet-1Billion: A large-scale benchmark for general object grasping," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2020, pp. 11444–11453.
* [20] M. Kokic, D. Kragic, and J. Bohg, "Learning task-oriented grasping from human activity datasets," _IEEE Robotics and Automation Letters_, vol. 5, no. 2, pp. 3352–3359, 2020.
* [21] I. Lenz, H. Lee, and A. Saxena, "Deep learning for detecting robotic grasps," _The International Journal of Robotics Research_, vol. 34, no. 4-5, pp. 705–724, 2015.
* [22] J. Redmon and A. Angelova, "Real-time grasp detection using convolutional neural networks," in _2015 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2015, pp. 1316–1322.
* [23] Y. Qin, R. Chen, H. Zhu, M. Song, J. Xu, and H. Su, "S4G: Amodal single-view single-shot SE(3) grasp detection in cluttered scenes," in _Conference on Robot Learning_. PMLR, 2020, pp. 53–65.
* [24] K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige _et al._, "Using simulation and domain adaptation to improve efficiency of deep robotic grasping," in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 4243–4250.
* [25] B. Eisner, H. Zhang, and D. Held, "FlowBot3D: Learning 3D articulation flow to manipulate articulated objects," _arXiv preprint arXiv:2205.04382_, 2022.
* [26] G. Schiavi, P. Wulkop, G. Rizzi, L. Ott, R. Siegwart, and J. J. Chung, "Learning agent-aware affordances for closed-loop interaction with articulated objects," in _2023 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2023, pp. 5916–5922.
* [27] H. Luo, W. Zhai, J. Zhang, Y. Cao, and D. Tao, "Leverage interactive affinity for affordance learning," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 6809–6819.
* [28] J. J. Gibson, "The theory of affordances," _Hilldale, USA_, vol. 1, no. 2, pp. 67–82, 1977.
* [29] T. Nagarajan, C. Feichtenhofer, and K. Grauman, "Grounded human-object interaction hotspots from video," in _ICCV_, 2019.
* [30] T. Nagarajan and K. Grauman, "Learning affordance landscapes for interaction exploration in 3D environments," in _NeurIPS_, 2020.
* [31] P. Mandikal and K. Grauman, "Learning dexterous grasping with object-centric visual affordances," in _IEEE International Conference on Robotics and Automation (ICRA)_, 2021.
* [32] E. Corona, A. Pumarola, G. Alenya, F. Moreno-Noguer, and G. Rogez, "GanHand: Predicting human grasp affordances in multi-object scenes," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 5031–5041.
* [33] K. M. Varadarajan and M. Vincze, "AfRob: The affordance network ontology for robots," in _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2012, pp. 1343–1350.
* [34] Y. Geng, B. An, H. Geng, Y. Chen, Y. Yang, and H. Dong, "End-to-end affordance learning for robotic manipulation," _arXiv preprint arXiv:2209.12941_, 2022.
* [35] C. Ning, R. Wu, H. Lu, K. Mo, and H. Dong, "Where2explore: Few-shot affordance learning for unseen novel categories of articulated objects," _arXiv preprint arXiv:2309.07473_, 2023.
* [36] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani, "Where2act: From pixels to actions for articulated 3D objects," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 6813–6823.
* [37] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, L. Yi, A. X. Chang, L. J. Guibas, and H. Su, "SAPIEN: A simulated part-based interactive environment," in _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020.
* [38] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su, "PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding," in _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [39] A. L. Xiaoshuai Zhang, Fanbo Xiang, "Kuafu: A real-time ray tracing renderer," [https://github.com/jetd1/kuafu](https://github.com/jetd1/kuafu), 2022, accessed: January 18, 2023.
* [40] Z. Xu, Z. He, and S. Song, "Universal manipulation policy network for articulated objects," _IEEE Robotics and Automation Letters_, vol. 7, no. 2, pp. 2447–2454, 2022.