language:
- en
license: apache-2.0
datasets:
- BLIP3o/BLIP3o-60k
- BLIP3o/BLIP3o-Pretrain-Short-Caption
- BLIP3o/BLIP3o-Pretrain-Long-Caption
pipeline_tag: any-to-any
library_name: diffusers
BLIP3-o
BLIP3-o is a unified multimodal model that combines the reasoning and instruction-following strengths of autoregressive models with the generative power of diffusion models. Unlike prior works that diffuse VAE features or raw pixels, BLIP3-o diffuses semantically rich CLIP image features, enabling a powerful and efficient architecture for both image understanding and generation.
arXiv
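To make the CLIP-feature-diffusion idea concrete, below is a minimal conceptual sketch (not the actual BLIP3-o code) of a flow-matching objective applied to CLIP image features rather than VAE latents or pixels; the diffusion_head module, tensor shapes, and conditioning interface are illustrative assumptions.

```python
# Conceptual sketch only: flow matching on semantic CLIP features instead of
# VAE latents or raw pixels. Names and shapes are illustrative, not BLIP3-o's API.
import torch
import torch.nn.functional as F

def clip_feature_flow_matching_loss(diffusion_head, clip_feats, cond):
    # clip_feats: (B, N, D) CLIP image features, the diffusion target
    # cond:       (B, L, D) hidden states from the autoregressive backbone
    t = torch.rand(clip_feats.size(0), 1, 1, device=clip_feats.device)
    noise = torch.randn_like(clip_feats)
    x_t = (1.0 - t) * noise + t * clip_feats       # interpolation path from noise to features
    target_velocity = clip_feats - noise           # flow-matching regression target
    pred_velocity = diffusion_head(x_t, t, cond)   # model predicts the velocity field
    return F.mse_loss(pred_velocity, target_velocity)
```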
Update
- [2025/05/19] 🔥 We understand this is a large codebase, so we have shared a high-level overview of its Code Structure; feel free to open an issue if you encounter any problems.
- [2025/05/16] 🔥 We've released a pretraining dataset of 20 million images with detailed captions (BLIP3o Pretrain Long Caption) and 4 million images with short captions (BLIP3o Pretrain Short Caption). All images and their captions are packed into tar archives, so no separate image URL downloads or manual unzipping are required; see the loading sketch after this list.
- [2025/05/16] 🔥 We've reorganized and cleaned up the repository to ensure a clear, well-structured codebase. Please give the training and inference scripts a try, and feel free to open an issue if you run into any problems. We apologize for any confusion caused by the original codebase release.
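As a rough illustration of how the tar-archived data can be read, here is a minimal sketch using Python's standard tarfile module. It assumes a webdataset-style layout where each sample is a paired image/caption member (e.g. 000001.jpg and 000001.txt); check the dataset cards for the exact member names.

```python
# Minimal sketch, assuming paired image/caption members inside each tar shard.
# Verify the actual member naming on the dataset cards before relying on this.
import io
import tarfile
from PIL import Image

def iter_samples(tar_path):
    captions, images = {}, {}
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            key, _, ext = member.name.rpartition(".")
            data = tar.extractfile(member).read()
            if ext == "txt":
                captions[key] = data.decode("utf-8")
            elif ext in ("jpg", "jpeg", "png"):
                images[key] = Image.open(io.BytesIO(data)).convert("RGB")
            if key in captions and key in images:
                yield images.pop(key), captions.pop(key)

# Example usage (shard name is a placeholder):
# for image, caption in iter_samples("shard_000000.tar"):
#     print(image.size, caption[:80])
```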
✨ Highlights
- Fully Open-Source: training data (pretraining and instruction tuning), training recipe, model weights, and code are all released.
- Unified Architecture: a single architecture for both image understanding and generation.
- CLIP Feature Diffusion: Directly diffuses semantic vision features for stronger alignment and performance.
- State-of-the-art performance: across a wide range of image understanding and generation benchmarks.
Demo
You can try out BLIP3-o in your browser using our interactive Demo.
Install package for training
```bash
conda create -n blip3o python=3.11 -y
conda activate blip3o
pip install --upgrade pip setuptools
pip install -r requirements.txt
```
Inference
You can clone our GitHub repository:
```bash
git clone https://github.com/JiuhaiChen/BLIP3o.git
```
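The inference code below imports from the blip3o package, so run it from inside the cloned repository (the directory name follows from the clone URL):
```bash
cd BLIP3o
```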
Download our checkpoint (snapshot_download prints the local checkpoint path, which you will use as model_path below):
```bash
python -c "from huggingface_hub import snapshot_download; print(snapshot_download(repo_id='BLIP3o/BLIP3o-Model', repo_type='model'))"
```
and run the inference code below, setting model_path, model_name, and diffusion_path to point at the downloaded checkpoint:
```python
import torch
from diffusers import DiffusionPipeline
from transformers import AutoProcessor
from blip3o.constants import *
from blip3o.conversation import conv_templates, SeparatorStyle
from blip3o.model.builder import load_pretrained_model

# Placeholders: point these at the checkpoint directory printed by snapshot_download above.
model_path = "/path/to/BLIP3o-Model"            # local checkpoint directory
model_name = "BLIP3o-Model"                     # model name passed to load_pretrained_model
diffusion_path = "/path/to/diffusion-decoder"   # diffusion decoder included in the checkpoint

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
tokenizer, multi_model, context_len = load_pretrained_model(model_path, None, model_name)
pipe = DiffusionPipeline.from_pretrained(
    diffusion_path,
    custom_pipeline="pipeline_llava_gen",
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
    variant="bf16",
    multimodal_encoder=multi_model,
    tokenizer=tokenizer,
    safety_checker=None,
)
pipe.vae.to("cuda")
pipe.unet.to("cuda")
def add_template(prompt):
    # Wrap the raw caption in the Qwen conversation template expected by the model.
    conv = conv_templates['qwen'].copy()
    conv.append_message(conv.roles[0], prompt[0])
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    return [prompt]
prompt = "A photo of cute cat"
gen_img = pipe(add_template([f"Please generate image based on the following caption: {prompt}"]), guidance_scale=3.0)
Training
We include two scripts: slurm.sh for multi-node training on Slurm clusters, and run.sh for debugging.
For both slurm.sh and run.sh, you need to set the Hugging Face cache directory HF_HOME, the training data folder IMG_FOLDER, and the output model folder OUTPUT_FOLDER.
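A minimal sketch of that setup (the paths are placeholders and the invocation commands are illustrative; the variable names are the ones listed above):

```bash
# Placeholder paths: point these at your own cache, data, and output directories.
export HF_HOME=/path/to/huggingface_cache
export IMG_FOLDER=/path/to/training_data
export OUTPUT_FOLDER=/path/to/output_checkpoints

bash run.sh        # debugging run
# sbatch slurm.sh  # multi-node training on a Slurm cluster
```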
For our open-source model training, we combine the pretraining datasets (both the long- and short-caption subsets) with images from JourneyDB; you can download JourneyDB separately. When training the diffusion transformer from scratch, we recommend using a large number of training steps along with a cosine annealing learning rate schedule that decays from 1×10⁻⁴ down to 1×10⁻⁵, as sketched below.
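A minimal sketch of that schedule with PyTorch's built-in scheduler (the optimizer, total step count, and stand-in model are assumptions; the actual training scripts configure this themselves):

```python
# Illustrative only: cosine annealing from 1e-4 down to 1e-5 over the full run.
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(8, 8)        # stand-in for the diffusion transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)   # peak learning rate
total_steps = 100_000                # assumed value; use a large number of steps
scheduler = CosineAnnealingLR(optimizer, T_max=total_steps, eta_min=1e-5)

for step in range(total_steps):
    # ... forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()                 # decays the learning rate toward eta_min
```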
CLIP + Diffusion (Encoder + Decoder)
We also provide two CLIP + Diffusion (encoder + decoder) combinations:
- EVA-CLIP + SDXL: the model checkpoint already includes the diffusion decoder (diffusion-decoder). The EVA-CLIP vision tower weights can be downloaded separately (EVA-CLIP), and the EVA-CLIP preprocessing is included in the training code (EVA-CLIP-preprocess).
- SigLIP + SANA: coming soon
Supported Tasks
- Text → Text
- Image → Text (Image Understanding)
- Text → Image (Image Generation)
- Image → Image (Image Editing)
- Multitask Training (mixed image generation and understanding training)
Supported Image Generation Methods
- CLIP + MSE
- CLIP + Flow Matching
- VAE + Flow Matching
- Transfusion, LMFusion
Supported Autoregressive Backbones
- Qwen-2.5-VL
- LLaMA 3
We suggest using Qwen-2.5-VL as the backbone; we are still fixing some tokenizer issues for LLaMA 3.