## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-feature-extraction", model="p1atdev/style_250412.vit_base_patch16_siglip_384.v2_webli")
```
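The pipeline accepts an image path, URL, or PIL image and returns the extracted features directly. A minimal usage sketch; the file name is a placeholder:

```python
# "image.png" is a hypothetical local file.
features = pipe("image.png")
# Number of per-patch feature vectors (assuming the default, non-pooled output).
print(len(features[0]))
```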
```python
# Load model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("p1atdev/style_250412.vit_base_patch16_siglip_384.v2_webli")
model = AutoModel.from_pretrained("p1atdev/style_250412.vit_base_patch16_siglip_384.v2_webli")
```
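Loading the model directly gives access to the raw outputs. A minimal sketch, assuming a local image file and a SigLIP-style `pooler_output` on the checkpoint (both are assumptions, not stated by the model card):

```python
import torch
from PIL import Image

image = Image.open("image.png").convert("RGB")  # hypothetical local file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Assumption: the vision model exposes a pooled embedding; if not,
# fall back to outputs.last_hidden_state and pool it yourself.
embedding = outputs.pooler_output  # shape: (1, hidden_size)
```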
## Custom preprocessing

The torchvision pipeline below resizes the image so its longer edge matches `image_size`, letterboxes it with black padding, center-crops to a 384×384 square, and rescales pixel values from 0-255 to 0-1. The input is assumed to be a PIL image:
```python
import torch
import torchvision.transforms.v2 as T

image_size = 384
preprocessor = T.Compose(
    [
        T.ToImage(),  # assumption: input is a PIL image; convert to a tensor first
        T.Resize(
            size=None,  # with size=None, the longer edge is resized to max_size
            max_size=image_size,
            interpolation=T.InterpolationMode.NEAREST,
        ),
        T.Pad(
            padding=image_size // 2,
            fill=0,  # black
        ),
        T.CenterCrop(
            size=(image_size, image_size),
        ),
        T.ToDtype(dtype=torch.float32, scale=True),  # 0-255 -> 0.0-1.0
    ]
)
```
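A sketch of feeding the custom preprocessing into the model loaded above; the `pixel_values` keyword follows the standard Transformers vision-model signature, and the file name is a placeholder:

```python
from PIL import Image

image = Image.open("image.png").convert("RGB")  # hypothetical local file
pixel_values = preprocessor(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

embedding = outputs.pooler_output  # assumption: pooled style embedding, as above
```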