# Multi-GraspLLM: A Multimodal LLM for Multi-Hand Semantic-Guided Grasp Generation
**[Project Page](https://multi-graspllm.github.io/)** | **[arXiv](https://arxiv.org/abs/2412.08468)**
---
## Updates
- **2025.3**: Added the Jaco hand data; this completes the final version of the dataset.
- **2025.1**: Added the Barrett hand data.
- **2024.12**: Released the Multi-GraspSet dataset with object meshes.
- **2024.12**: Released the Multi-GraspSet dataset with contact annotations.
---
## Overview
We introduce **Multi-GraspSet**, the first large-scale multi-hand grasp dataset enriched with automatic contact annotations.
<table style="margin: auto; text-align: center;">
<tr>
<td>
<img src="./assets/picture/dataset_construction_new.png" alt="Structure of Multi-GraspLLM" width="1000">
</td>
</tr>
<tr>
<td>
<p>The construction process of Multi-GraspSet</p>
</td>
</tr>
</table>
<table style="margin: auto; text-align: center;">
<tr>
<td>
<img src="./assets/picture/vis_dataset_grasp_output_01_new.png" alt="Structure of Multi-GraspLLM" width="600">
</td>
</tr>
<tr>
<td>
<p>Visualization of Multi-GraspSet with contact annotations</p>
</td>
</tr>
</table>
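For a first look at what a grasp entry might contain, the snippet below sketches how one could inspect a single annotation file. The path and the expectation of an `.npz` archive are illustrative assumptions, not the dataset's documented schema; consult the released files for the actual layout.

```python
import numpy as np

# Illustrative only: the path below is hypothetical, and the dataset's
# actual file format may differ. Adjust to the released file layout.
data = np.load("annotations/example_grasp.npz", allow_pickle=True)
for key in data.files:
    entry = data[key]
    print(key, getattr(entry, "shape", type(entry)))
```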
---
## Installation
Follow these steps to set up the evaluation environment:
1. **Create the Environment**
```bash
conda create --name eval python=3.9
conda activate eval
```
2. **Install PyTorch and Dependencies**
```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
```
> ⚠️ Ensure your CUDA toolkit version is compatible with this PyTorch build.
3. **Install PyTorch Kinematics**
```bash
cd ./pytorch_kinematics
pip install -e .
```
4. **Install Remaining Requirements**
```bash
pip install -r requirements_eval.txt
```
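After these steps, a quick sanity check (a minimal sketch, not part of the repository) can confirm that the key packages import and that CUDA is visible:

```python
# Sanity check for the evaluation environment (illustrative only).
import torch
import pytorch_kinematics  # installed from ./pytorch_kinematics in step 3

print("PyTorch version:", torch.__version__)         # expect 2.0.1
print("CUDA available:", torch.cuda.is_available())  # should be True on a GPU machine
print("pytorch_kinematics imported successfully")
```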
---
## Visualize the Dataset
Open and run the `vis_mid_dataset.ipynb` notebook to visualize the dataset. A script-based alternative is sketched below.
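If you prefer a standalone script over the notebook, a minimal sketch along the following lines can render an object mesh. It assumes `trimesh` is available in the environment and uses a hypothetical mesh path; the notebook remains the reference visualization.

```python
import trimesh

# Hypothetical path: point this at any object mesh shipped with the dataset.
mesh = trimesh.load("assets/meshes/example_object.obj")
print(mesh)   # basic stats; trimesh may return a Scene for multi-part files
mesh.show()   # opens an interactive viewer window
```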