PG-InstructBLIP checkpoint for blip2_t5_instruct + flant5xl (to run on 16GB GPUs)

#4
by Baonq - opened

Hi! Thanks for releasing PG-InstructBLIP and the PhysObjects/PG-VLM work.

I’m trying to run PG-InstructBLIP on a 16GB GPU (Tesla P100/V100). The released checkpoint (pgvlm_weights.bin) targets blip2_t5_instruct with model_type="flant5xxl", which is difficult to run reliably on 16GB GPUs.

In the paper you mention using a smaller VLM (FLAN-T5 XL). Could you please share/upload a PG-InstructBLIP checkpoint fine-tuned with model_type="flant5xl" (e.g., a link to a Hugging Face repo or a direct checkpoint file)?

If you prefer not to share it publicly, I can also receive the checkpoint via email: qbao1607@gmail.com.

Thanks a lot!
