ViTP-Transformers-Collections
Collection
huggingface/transformers implementation • 6 items
ViTP (Visual Instruction Pretraining) vision backbone: the ViT-L 300M variant pretrained on general-domain visual instruction data. Compatible with InternVisionModel from InternVL.
The model repo includes the modeling code, so it can be loaded with transformers alone (no ViTP repo needed):
from transformers import AutoModel, AutoImageProcessor
from PIL import Image
import torch

device = "cuda"
model = AutoModel.from_pretrained(
    "BiliSakura/ViTP-ViT-L-300M-General",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map=device,
).eval()
processor = AutoImageProcessor.from_pretrained("BiliSakura/ViTP-ViT-L-300M-General")

# The processor expects image data rather than a path, so open the file first.
image = Image.open("image.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device, model.dtype)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

# Pooled CLS token: (1, 1024)
features = outputs.pooler_output
# Or the full token sequence: outputs.last_hidden_state
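Beyond the pooled output, the full token sequence can be turned into a single image embedding, e.g. for retrieval or similarity scoring. The sketch below continues the snippet above; the mean pooling over patch tokens and the L2 normalization are illustrative choices, not part of the ViTP recipe, and it assumes the standard ViT token layout where the first token is CLS.

import torch.nn.functional as F

# last_hidden_state: (1, 1 + num_patches, 1024) for this ViT-L backbone.
# Assumption: index 0 is the CLS token, the remaining tokens are patch tokens.
tokens = outputs.last_hidden_state

cls_embedding = tokens[:, 0]                  # CLS token, shape (1, 1024)
patch_embedding = tokens[:, 1:].mean(dim=1)   # mean-pooled patch tokens, shape (1, 1024)

# L2-normalize so a dot product between two images' embeddings is a cosine similarity.
cls_embedding = F.normalize(cls_embedding, dim=-1)
patch_embedding = F.normalize(patch_embedding, dim=-1)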
Citation
@article{Li_2025_ViTP,
  title={Visual Instruction Pretraining for Domain-Specific Foundation Models},
  author={Li, Yuxuan and Zhang, Yicheng and Tang, Wenhao and Dai, Yimian and Cheng, Ming-Ming and Li, Xiang and Yang, Jian},
  journal={arXiv},
  year={2025}
}
Base model
GreatBird/ViTP