BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Paper: arXiv:2301.12597
BLIP-2 is a unified vision-language model designed for tasks such as image captioning and visual question answering. It employs a novel pre-training strategy that leverages frozen pre-trained image encoders and large language models (LLMs) to efficiently bridge the modality gap between vision and language.
BLIP-2 (Bootstrapping Language-Image Pre-training) introduces a lightweight Querying Transformer (Q-Former) that connects a frozen image encoder with a frozen LLM. This architecture enables effective vision-language understanding and generation without the need for end-to-end training of large-scale models. The model is capable of zero-shot image-to-text generation and can follow natural language instructions.
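As a rough illustration of this zero-shot image-to-text and instruction-following behaviour, below is a minimal sketch using the Hugging Face transformers BLIP-2 integration. The specific classes (Blip2Processor, Blip2ForConditionalGeneration), the Salesforce/blip2-opt-2.7b checkpoint, the sample image URL, and the prompt format are assumptions for illustration and are not taken from this page.

```python
# Minimal sketch: zero-shot captioning and prompted VQA with BLIP-2
# via the Hugging Face transformers integration (checkpoint name and
# prompt format are illustrative assumptions).
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# 1) Zero-shot image captioning: no text prompt is given.
inputs = processor(images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print("Caption:", caption)

# 2) Visual question answering via a natural-language instruction.
prompt = "Question: how many animals are in the picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=10)
answer = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print("Answer:", answer)
```

Because the image encoder and the LLM stay frozen, only the Q-Former (and a small projection) is trained during pre-training, which is why the same checkpoint can be driven either with no prompt (captioning) or with an instruction-style prompt (VQA) at inference time.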