---
license: apache-2.0
---

# FineGrasp: Towards Robust Grasping for Delicate Objects
## 1. Grasp Performance on the GraspNet-1Billion Dataset

| Method | ckpt | Camera | Seen (AP) | Similar (AP) | Novel (AP) | Average (AP) |
| --------- | ---- | --------- | --------- | ------------ | ---------- | ------------ |
| FineGrasp | finegrasp_pipeline/model.safetensors | Realsense | 71.67 | 62.83 | 27.40 | 53.97 |
| FineGrasp + CD | finegrasp_pipeline/model.safetensors | Realsense | 73.71 | 64.56 | 28.14 | 55.47 |
| FineGrasp + Simulation Data | finegrasp_pipeline_sim/model.safetensors | Realsense | 70.21 | 61.98 | 26.18 | 52.79 |

We also provide a grasp-based baseline for the [Challenge Cup](https://developer.d-robotics.cc/tiaozhanbei-2025); please refer to this [code](https://github.com/HorizonRobotics/robo_orchard_lab/tree/master/projects/pick_place_agent).

> Notice: finegrasp_pipeline_sim/model.safetensors is trained for the [Challenge Cup](https://developer.d-robotics.cc/tiaozhanbei-2025) RoboTwin simulation benchmark.

## 2. Example Inference Code

```Python
import os

import numpy as np
import scipy.io as scio
from huggingface_hub import snapshot_download
from PIL import Image

from robo_orchard_lab.inference import InferencePipelineMixin
from robo_orchard_lab.models.finegrasp.processor import GraspInput

# Download the pipeline checkpoint and example data from the Hub.
file_path = snapshot_download(
    repo_id="HorizonRobotics/FineGrasp",
    allow_patterns=[
        "finegrasp_pipeline/**",
        "data_example/**",
    ],
)

loaded_pipeline = InferencePipelineMixin.load(
    os.path.join(file_path, "finegrasp_pipeline")
)

rgb_image_path = os.path.join(file_path, "data_example/0000_rgb.png")
depth_image_path = os.path.join(file_path, "data_example/0000_depth.png")
intrinsic_file = os.path.join(file_path, "data_example/0000.mat")

depth_image = np.array(Image.open(depth_image_path), dtype=np.float32)
rgb_image = np.array(Image.open(rgb_image_path), dtype=np.float32) / 255.0
intrinsic_matrix = scio.loadmat(intrinsic_file)["intrinsic_matrix"]

# Workspace limits [x_min, x_max, y_min, y_max, z_min, z_max] in meters.
workspace = [-1, 1, -1, 1, 0.0, 2.0]
# Raw depth values are divided by this factor to obtain meters.
depth_scale = 1000.0

input_data = GraspInput(
    rgb_image=rgb_image,
    depth_image=depth_image,
    depth_scale=depth_scale,
    intrinsic_matrix=intrinsic_matrix,
    workspace=workspace,
)

loaded_pipeline.to("cuda")
loaded_pipeline.model.eval()
output = loaded_pipeline(input_data)
print(f"Best grasp pose: {output.grasp_poses[0]}")
```

## Citation

```
@misc{du2025finegrasp,
      title={FineGrasp: Towards Robust Grasping for Delicate Objects},
      author={Yun Du and Mengao Zhao and Tianwei Lin and Yiwei Jin and Chaodong Huang and Zhizhong Su},
      year={2025},
      eprint={2507.05978},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2507.05978},
}
```
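For intuition about what `depth_image`, `intrinsic_matrix`, `depth_scale`, and `workspace` represent, the sketch below back-projects a depth map into a 3D point cloud with a pinhole camera model and crops it to the workspace box, which is conceptually what a grasp pipeline does with these inputs. This is a standalone illustration, not FineGrasp's internal preprocessing; the `depth_to_points` helper and the synthetic data are our own.

```python
import numpy as np

def depth_to_points(depth_image, intrinsic_matrix, depth_scale, workspace):
    """Back-project a depth map to a point cloud and crop to the workspace.

    depth_image: (H, W) raw depth; dividing by depth_scale yields meters.
    intrinsic_matrix: 3x3 pinhole matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    workspace: [x_min, x_max, y_min, y_max, z_min, z_max] in meters.
    """
    fx, fy = intrinsic_matrix[0, 0], intrinsic_matrix[1, 1]
    cx, cy = intrinsic_matrix[0, 2], intrinsic_matrix[1, 2]
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_image / depth_scale                    # raw units -> meters
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    x_min, x_max, y_min, y_max, z_min, z_max = workspace
    mask = (
        (points[:, 0] >= x_min) & (points[:, 0] <= x_max)
        & (points[:, 1] >= y_min) & (points[:, 1] <= y_max)
        & (points[:, 2] > z_min) & (points[:, 2] <= z_max)
    )
    return points[mask]

# Synthetic example: a flat plane 0.5 m in front of the camera,
# stored in millimeters (hence depth_scale = 1000.0).
intrinsics = np.array([[600.0, 0.0, 320.0],
                       [0.0, 600.0, 240.0],
                       [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 500.0)  # 500 mm everywhere
cloud = depth_to_points(depth, intrinsics, 1000.0, [-1, 1, -1, 1, 0.0, 2.0])
print(cloud.shape)  # every pixel falls inside the workspace here
```

Points outside the workspace box (e.g. the floor far behind the table) are discarded before grasp prediction, which is why the example sets `workspace = [-1, 1, -1, 1, 0.0, 2.0]` around the scene.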