
# MeViS

## Inference & Evaluation

To perform inference using the trained model, use the following command:

```shell
sbatch [inference script file] [/path/to/output_dir] [/path/to/pretrained_weight] --backbone [backbone] --batch_size [batch size; 1 is recommended] --frame_batch_size [frame batch size]
```

For example, to run inference with the Swin-Tiny model, execute:

```shell
sbatch ./script/dist_test_mevis_swint.sh mevis_dirs/swin_tiny_mevis pretrained_weights/swin-tiny_pretrain.pth --backbone swin_t_p4w7 --batch_size 1 --frame_batch_size 64
```

To visualize the predicted masks, add the `--visualize` flag to the command above.
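Since `sbatch` itself will not complain about a bad checkpoint path until the job is already queued, a small pre-flight wrapper can catch mistakes early. The helper below is a sketch and not part of the repository; the script path and flags mirror the Swin-Tiny example, and the `echo` prefix keeps it a dry run:

```shell
#!/bin/sh
# Hypothetical helper (not part of the repo): check that the checkpoint
# exists, create the output directory, then print the sbatch command
# that would be submitted. Remove the `echo` to submit for real.
submit_inference() {
    output_dir="$1"; weights="$2"; shift 2
    if [ ! -f "$weights" ]; then
        echo "pretrained weight not found: $weights" >&2
        return 1
    fi
    mkdir -p "$output_dir"   # sbatch does not create the output dir for you
    echo sbatch ./script/dist_test_mevis_swint.sh "$output_dir" "$weights" "$@"
}
```

For example, `submit_inference mevis_dirs/swin_tiny_mevis pretrained_weights/swin-tiny_pretrain.pth --backbone swin_t_p4w7 --batch_size 1 --frame_batch_size 64` prints the full submission line once the checks pass.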

## Training

- **Finetuning**

The following command runs both the training and inference stages:

```shell
sbatch [train script file] [/path/to/output_dir] [/path/to/pretrained_weight] --backbone [backbone]
```

For example, to train the Swin-Tiny model, run:

```shell
sbatch ./scripts/dist_train_mevis.sh mevis_dirs/swin_tiny_mevis pretrained_weights/swin-tiny_pretrain.pth --backbone swin_t_p4w7
```

The training script is based on `torch.distributed.launch` with SLURM's GPU allocation. Set the `--nproc_per_node` option to match the number of GPUs allocated on the node, as shown below:

```shell
python3 -m torch.distributed.launch --nproc_per_node=4 --use_env \
main.py --with_box_refine --binary --freeze_text_encoder \
```
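When the GPU count varies between jobs, the process count can be derived from SLURM's environment instead of being hard-coded. `SLURM_GPUS_ON_NODE` is a standard variable that SLURM sets inside a job with GPUs allocated; the fallback of 1 for runs outside SLURM is an illustrative assumption:

```shell
#!/bin/sh
# Derive the process count from SLURM's per-node GPU allocation instead
# of hard-coding --nproc_per_node; default to 1 outside a SLURM job
# (the default is an assumption for illustration).
NPROC="${SLURM_GPUS_ON_NODE:-1}"

# The launch line from the training script would then read:
#   python3 -m torch.distributed.launch --nproc_per_node="$NPROC" --use_env \
#       main.py --with_box_refine --binary --freeze_text_encoder \
echo "$NPROC"
```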