LLaVA-Next-3D

Installation

  1. Clone this repository:
git clone https://github.com/ZCMax/LLaVA-Next-3D.git
cd LLaVA-Next-3D
  2. Create the conda environment:
conda create -n llavanext3d python=3.10 -y
conda activate llavanext3d
pip install --upgrade pip  # Enable PEP 660 support.
pip install -e ".[train]"
pip install flash-attn --no-build-isolation     # install flash attention
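After installation, a quick sanity check can catch a broken environment early. The helper below is a sketch, not part of the repository; the package names (`torch`, `flash_attn`) are assumptions inferred from the pip commands above. Run it inside the activated `llavanext3d` environment.

```shell
# Hypothetical sanity check: verify the core dependencies import cleanly
# (package names inferred from the install commands above).
check_env() {
  python -c "import torch, flash_attn; print('ok')"
}

# Usage (inside the activated conda environment):
# check_env
```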

Data Preparation

The data directory should be organized as follows:

LLaVA-Next-3D  # project root
├── data
│   ├── scannet
│   │   ├── scans
│   │   ├── posed_images
│   │   ├── pcd_with_object_aabbs
│   │   └── mask
│   ├── embodiedscan
│   │   └── embodiedscan_infos_full_llava3d_v2.json
│   ├── metadata
│   │   ├── scannet_select_frames.json
│   │   ├── pcd_discrete_0.1.pkl
│   │   ├── scannet_train_gt_box.json
│   │   └── scannet_val_pred_box.json
│   └── processed
│       ├── multi3drefer_train_llava_style.json
│       ├── multi3drefer_val_llava_style.json
│       └── ...

We have prepared the organized data under /mnt/hwfile/openmmlab/zhuchenming/llava-next-3d-data; you can symlink it directly as your data directory. Currently, only training on ScanNet is supported.
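The linking step can be sketched as below; the cluster path comes from the note above, and the checked entries mirror the top level of the tree. Adjust both if your data lives elsewhere.

```shell
# Symlink the prepared dataset into the project root (cluster path from
# the note above; replace with your own data location if needed):
[ -e data ] || ln -sfn /mnt/hwfile/openmmlab/zhuchenming/llava-next-3d-data data

# Report any missing top-level entries from the expected layout:
for d in scannet embodiedscan metadata; do
  [ -e "data/$d" ] || echo "missing: data/$d"
done
```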

Training & Inference

Full Fine-tuning

You can use sbatch to launch the multi-node training script:

sh scripts/3d/train/train_16gpu_sbatch.sh

Inference

sh scripts/3d/eval/eval_scanrefer.sh $CKPT_NAME uniform 32
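Here $CKPT_NAME is the name of the checkpoint to evaluate; `uniform` and `32` are passed through as the remaining script arguments. To sweep several checkpoints with the same settings, a small hypothetical wrapper (not part of the repository; checkpoint names are placeholders) could look like:

```shell
# Hypothetical helper: evaluate one checkpoint on ScanRefer, reusing the
# `uniform 32` arguments from the command above.
eval_ckpt() {
  sh scripts/3d/eval/eval_scanrefer.sh "$1" uniform 32
}

# Example sweep (checkpoint names are placeholders):
# for c in ckpt-epoch-1 ckpt-epoch-2; do eval_ckpt "$c"; done
```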