ViTP-ViT-L-300M-Med

ViTP (Visual Instruction Pretraining) vision backbone: the ViT-L (300M parameter) variant, pretrained on visual instruction data from the medical imaging domain. The checkpoint is compatible with InternVisionModel from InternVL.

Model Details

  • Architecture: InternVisionModel (ViT-L, 24 layers, 1024 hidden size, 16 attention heads)
  • Image size: 448×448
  • Patch size: 14
  • Domain: Medical imaging
  • Base model: GreatBird/ViTP
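
The numbers above fix the output geometry: a 448×448 input tiled into 14×14 patches gives a 32×32 grid, i.e. 1024 patch tokens, plus a class token if one is prepended (as in InternViT). A minimal shape sanity check, assuming that token layout:

image_size, patch_size, hidden_size = 448, 14, 1024

grid = image_size // patch_size      # 32 patches per side
num_patches = grid * grid            # 32 * 32 = 1024 patch tokens
seq_len = num_patches + 1            # +1 class token (assumed, as in InternViT)

print((1, seq_len, hidden_size))     # expected last_hidden_state shape: (1, 1025, 1024)
print((1, hidden_size))              # expected pooler_output shape:     (1, 1024)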

Usage

The model repository ships its own modeling code, so it loads directly with transformers; cloning the ViTP repo is not required:

from transformers import AutoModel, AutoImageProcessor
from PIL import Image
import torch

device = "cuda"
model = AutoModel.from_pretrained(
    "BiliSakura/ViTP-ViT-L-300M-Med",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map=device,
).eval()

processor = AutoImageProcessor.from_pretrained("BiliSakura/ViTP-ViT-L-300M-Med")

# The processor expects a PIL image (or a list of them), not a file path.
image = Image.open("image.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device, model.dtype)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

# Pooled CLS token: (1, 1024)
features = outputs.pooler_output
# Or the full token sequence: outputs.last_hidden_state
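
As a downstream example, the pooled features can serve directly as image embeddings. The sketch below (reusing the model and processor loaded above) compares two images by cosine similarity; the file names are placeholders, and the commented-out alternative mean-pools the patch tokens while skipping the class token at index 0, assuming InternViT's token layout:

import torch.nn.functional as F

def embed(path):
    # Returns a (1, 1024) image embedding from the pooled CLS token.
    img = Image.open(path).convert("RGB")
    pv = processor(images=img, return_tensors="pt").pixel_values.to(device, model.dtype)
    with torch.no_grad():
        out = model(pixel_values=pv)
    return out.pooler_output
    # Alternative: mean-pool patch tokens, dropping the class token
    # (assumes it sits at index 0, as in InternViT):
    # return out.last_hidden_state[:, 1:, :].mean(dim=1)

a = embed("scan_a.jpg")  # placeholder file names
b = embed("scan_b.jpg")
similarity = F.cosine_similarity(a.float(), b.float()).item()
print(f"cosine similarity: {similarity:.4f}")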

Citation

@article{Li_2025_ViTP,
  title={Visual Instruction Pretraining for Domain-Specific Foundation Models},
  author={Li, Yuxuan and Zhang, Yicheng and Tang, Wenhao and Dai, Yimian and Cheng, Ming-Ming and Li, Xiang and Yang, Jian},
  journal={arXiv},
  year={2025}
}