## Installation
- Clone this repository:

```bash
git clone https://github.com/ZCMax/LLaVA-Next-3D.git
cd LLaVA-Next-3D
```
- Create the conda environment and install dependencies:

```bash
conda create -n llavanext3d python=3.10 -y
conda activate llavanext3d
pip install --upgrade pip  # enable PEP 660 support
pip install -e ".[train]"
pip install flash-attn --no-build-isolation  # install FlashAttention
```
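After installation, a quick sanity check can confirm the activated environment resolves to the interpreter created above (a minimal sketch; it only inspects the Python version):

```shell
# Confirm the activated env picked up the expected interpreter
# (should report 3.10.x inside llavanext3d).
python -c 'import sys; print("python:", sys.version.split()[0])'
```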
## Data Preparation
The data directory should be organized as follows:
```
LLaVA-3D-Next                  # project root
└── data
    ├── scannet
    │   ├── scans
    │   ├── posed_images
    │   ├── pcd_with_object_aabbs
    │   └── mask
    ├── embodiedscan
    │   └── embodiedscan_infos_full_llava3d_v2.json
    ├── metadata
    │   ├── scannet_select_frames.json
    │   ├── pcd_discrete_0.1.pkl
    │   ├── scannet_train_gt_box.json
    │   └── scannet_val_pred_box.json
    └── prcoessed
        ├── multi3drefer_train_llava_style.json
        ├── multi3drefer_val_llava_style.json
        └── ...
```
We have prepared the data, already organized in this layout, under `/mnt/hwfile/openmmlab/zhuchenming/llava-next-3d-data`; you can symlink it directly as your `data` directory. Currently, only training on ScanNet is supported.
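Linking the shared dataset in place of a local copy can be done as follows (a minimal sketch; `DATA_SRC` is the shared path mentioned above — adjust it if your copy lives elsewhere):

```shell
# Link the prepared dataset into the project root as ./data.
DATA_SRC=/mnt/hwfile/openmmlab/zhuchenming/llava-next-3d-data
[ -e data ] || ln -s "$DATA_SRC" data
readlink data   # prints the link target
```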
## Training & Inference
### Full Fine-tuning
You can use `sbatch` to launch the multi-node training script:

```bash
sh scripts/3d/train/train_16gpu_sbatch.sh
```
### Inference

```bash
sh scripts/3d/eval/eval_scanrefer.sh $CKPT_NAME uniform 32
```
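A concrete invocation might look like the following sketch. The checkpoint name is a placeholder, and the reading of the positional arguments (frame-sampling strategy and frame count) is an assumption, not confirmed by the script itself:

```shell
# Hypothetical example: CKPT_NAME is a placeholder for your trained checkpoint;
# "uniform" and "32" appear to select the frame-sampling strategy and frame count.
CKPT_NAME=llava-next-3d-ft
echo sh scripts/3d/eval/eval_scanrefer.sh "$CKPT_NAME" uniform 32
# Drop the leading `echo` to actually launch the evaluation.
```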