---
license: cc-by-nc-4.0
pipeline_tag: zero-shot-image-classification
---
# SmartCLIP: Modular Vision-language Alignment with Identification Guarantees
This repository is the official implementation of SmartCLIP, presented in the paper SmartCLIP: Modular Vision-language Alignment with Identification Guarantees.
SmartCLIP improves CLIP training with a mask-based approach to modular vision-language alignment, particularly for datasets that mix long and short captions. It addresses the information misalignment and entangled representations that arise in existing contrastive learning methods, preserving cross-modal semantic information while disentangling visual representations so they capture fine-grained textual concepts.
Code: https://github.com/MidPush/SmartCLIP
## Abstract
Contrastive Language-Image Pre-training (CLIP) has emerged as a pivotal model in computer vision and multimodal learning, achieving state-of-the-art performance at aligning visual and textual representations through contrastive learning. However, CLIP struggles with potential information misalignment in many image-text datasets and suffers from entangled representation. On the one hand, short captions for a single image in datasets like MSCOCO may describe disjoint regions in the image, leaving the model uncertain about which visual features to retain or disregard. On the other hand, directly aligning long captions with images can lead to the retention of entangled details, preventing the model from learning disentangled, atomic concepts -- ultimately limiting its generalization on certain downstream tasks involving short prompts. In this paper, we establish theoretical conditions that enable flexible alignment between textual and visual representations across varying levels of granularity. Specifically, our framework ensures that a model can not only preserve cross-modal semantic information in its entirety but also disentangle visual representations to capture fine-grained textual concepts. Building on this foundation, we introduce SmartCLIP, a novel approach that identifies and aligns the most relevant visual and textual representations in a modular manner. Superior performance across various tasks demonstrates its capability to handle information misalignment and supports our identification theory.
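As a rough intuition for the mask-based alignment idea, the sketch below (illustrative only, not the paper's actual objective or code) restricts image-text similarity to the embedding coordinates selected by a per-caption mask, so that a short caption only constrains the visual features it actually describes:

```python
# Illustrative sketch (NOT the paper's exact objective): score a caption
# against only the sub-dimensions of the image embedding that its mask
# selects, so unrelated visual features are neither retained nor penalized.
import numpy as np

def masked_similarity(image_emb, text_emb, mask):
    """Cosine similarity restricted to the masked coordinates."""
    zi = image_emb * mask
    zt = text_emb * mask
    return zi @ zt / (np.linalg.norm(zi) * np.linalg.norm(zt) + 1e-8)

rng = np.random.default_rng(0)
d = 8
image_emb = rng.normal(size=d)
text_emb = image_emb + 0.1 * rng.normal(size=d)  # caption matching the image
mask = np.zeros(d)
mask[:4] = 1.0                                   # caption covers first 4 dims only

print(masked_similarity(image_emb, text_emb, mask))
```

With the mask applied, a caption that matches only part of the image scores highly on the dimensions it covers, while the uncovered dimensions are free to encode other concepts.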
## Usage
Our model is based on the CLIP framework. You can use it by loading the pre-trained checkpoints and performing inference as shown below.
First, clone the repository and download the trained models:
```bash
git clone https://github.com/MidPush/SmartCLIP.git
cd SmartCLIP
mkdir checkpoints
wget https://huggingface.co/Shaoan/SmartCLIP/resolve/main/smartclip_l14.pt -O checkpoints/smartclip_l14.pt
wget https://huggingface.co/Shaoan/SmartCLIP/resolve/main/smartclip_b16.pt -O checkpoints/smartclip_b16.pt
```
Then, you can use the model for tasks like zero-shot image classification:
```python
from model import longclip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = longclip.load("./checkpoints/smartclip_l14.pt", device=device)

text = longclip.tokenize(["A cat is holding a yellow sign", "A dog is holding a yellow sign"]).to(device)
image = preprocess(Image.open("./assets/cat.webp")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    logits_per_image = image_features @ text_features.T
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probabilities:", probs)
```
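To turn the probabilities into a class prediction over a label set, the usual CLIP-style pattern is to build one prompt per class and take the argmax. The snippet below uses hypothetical labels and dummy probabilities standing in for the model's softmax output:

```python
import numpy as np

labels = ["cat", "dog", "car"]                       # hypothetical label set
prompts = [f"a photo of a {label}" for label in labels]

# `probs` would come from the softmax in the snippet above; dummy values here.
probs = np.array([[0.90, 0.07, 0.03]])
predicted = labels[int(probs.argmax(axis=-1)[0])]
print(predicted)  # -> cat
```

The `prompts` list would be passed to `longclip.tokenize` in place of the two example captions above.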
## Citation
If you find our work helpful for your research, please consider citing our paper:
```bibtex
@inproceedings{xie2025smartclip,
  title={SmartCLIP: Modular Vision-language Alignment with Identification Guarantees},
  author={Xie, Shaoan and Kong, Lingjing and Zheng, Yujia and Yao, Yu and Tang, Zeyu and Xing, Eric P and Chen, Guangyi and Zhang, Kun},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={29780--29790},
  year={2025}
}
```