---
datasets:
- nvidia/PhysicalAI-Robotics-GraspGen
language:
- en
---

Project Website: https://graspgen.github.io/ <br>
Code: https://github.com/NVlabs/GraspGen/

Abstract: Grasping is a fundamental robot skill, yet despite significant research advancements, learning-based 6-DOF grasping approaches are still not turnkey and struggle to generalize across different embodiments and in-the-wild settings. We build upon the recent success of modeling the object-centric grasp generation process as an iterative diffusion process. Our proposed framework, GraspGen, consists of a Diffusion-Transformer architecture that enhances grasp generation, paired with an efficient discriminator to score and filter sampled grasps. We introduce a novel and performant on-generator training recipe for the discriminator. To scale GraspGen to both objects and grippers, we release a new simulated dataset consisting of over 53 million grasps. We demonstrate that GraspGen outperforms prior methods in simulations with singulated objects across different grippers, achieves state-of-the-art performance on the FetchBench grasping benchmark, and performs well on a real robot with noisy visual observations.

## Model Architecture: <br>
**Architecture Type:** Diffusion Model, Point Cloud network. See paper for more details. <br>

## Input: <br>
**Input Type(s):** Object partial point cloud X; number of grasps to sample (B) <br>
**Input Format(s):** Point cloud of shape (N x 3), where N is the number of points <br>
**Input Parameters:** 3D <br>
**Other Properties Related to Input:** The point cloud must have shape (N x 3), i.e. N points with xyz coordinates, where N=2048. <br>
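
The input layout above can be illustrated with a small sketch. This is a hypothetical helper (not part of the GraspGen API) that resamples an arbitrary partial point cloud to the (N x 3) shape with N=2048 assumed by the model; `prepare_point_cloud` and its resampling strategy are illustrative assumptions.

```python
import numpy as np

def prepare_point_cloud(points: np.ndarray, n_points: int = 2048) -> np.ndarray:
    """Resample an (M x 3) xyz point cloud to the (n_points x 3) layout
    described above. Illustrative helper only, not the GraspGen API."""
    assert points.ndim == 2 and points.shape[1] == 3, "expected (M, 3) xyz points"
    # Sample with replacement when fewer than n_points are available,
    # without replacement otherwise.
    replace = points.shape[0] < n_points
    idx = np.random.choice(points.shape[0], size=n_points, replace=replace)
    return points[idx]

# A partial scan with 5000 points becomes a fixed-size (2048, 3) input.
cloud = prepare_point_cloud(np.random.rand(5000, 3))
print(cloud.shape)  # (2048, 3)
```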

## Output: <br>
**Output Type(s):** Grasp poses; corresponding confidence scores <br>
**Output Format:** Homogeneous transformation matrices; each score is a scalar value from 0 to 1 <br>
**Output Parameters:** [B, 4, 4] where B is the number of generated grasp poses; [B, 1] confidence scores <br>
**Other Properties Related to Output:** <br>
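
As a sanity check on the output contract above, the sketch below validates that a batch of grasps is a stack of B homogeneous 4x4 transforms (bottom row [0, 0, 0, 1], orthonormal rotation block) with [B, 1] scores in [0, 1]. `check_grasp_outputs` is an illustrative assumption, not part of the released code.

```python
import numpy as np

def check_grasp_outputs(grasps: np.ndarray, scores: np.ndarray) -> bool:
    """Validate [B, 4, 4] homogeneous grasp poses and [B, 1] scores in [0, 1].
    Illustrative helper only, not the GraspGen API."""
    assert grasps.ndim == 3 and grasps.shape[1:] == (4, 4)
    assert scores.shape == (grasps.shape[0], 1)
    # Bottom row of every homogeneous transform must be [0, 0, 0, 1].
    assert np.allclose(grasps[:, 3, :], [0.0, 0.0, 0.0, 1.0])
    # The 3x3 rotation block must be orthonormal: R @ R^T == I.
    rot = grasps[:, :3, :3]
    assert np.allclose(rot @ rot.transpose(0, 2, 1), np.eye(3), atol=1e-5)
    # Confidence scores are scalars in [0, 1].
    assert np.all((scores >= 0.0) & (scores <= 1.0))
    return True

# Example with B=2 identity poses and mid-range scores.
g = np.tile(np.eye(4), (2, 1, 1))
s = np.full((2, 1), 0.5)
print(check_grasp_outputs(g, s))  # True
```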