Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic-Guided Grasp Generation

Project Page | arXiv


Overview

We introduce Multi-GraspSet, the first large-scale multi-hand grasp dataset enriched with automatic contact annotations.

Figure: Structure of Multi-GraspLLM
Figure: The construction process of Multi-GraspSet
---

Updates

  • 2024.12: Released the Multi-GraspSet dataset with contact annotations.

Installation

Follow these steps to set up the evaluation environment:

  1. Create and Activate the Environment

    conda create --name eval python=3.9
    conda activate eval
    
  2. Install PyTorch and Dependencies

    pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
    

    ⚠️ Ensure your installed CUDA toolkit version matches the CUDA version your PyTorch build was compiled against.

  3. Install PyTorch Kinematics

    cd ./pytorch_kinematics
    pip install -e .
    
  4. Install Remaining Requirements

    pip install -r requirements_eval.txt
    

Visualize the Dataset

  1. Run the Visualization Notebook

    Open and execute the vis_mid_dataset.ipynb notebook to visualize the dataset.
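
    One way to open the notebook from the command line (assuming Jupyter is installed in the `eval` environment, e.g. via `pip install notebook`):

    ```shell
    # Start Jupyter and open the visualization notebook in the browser
    jupyter notebook vis_mid_dataset.ipynb
    ```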