---
library_name: transformers
pipeline_tag: image-classification
---

# ViT Cancer Classifier

A Vision Transformer (ViT) model trained for cancer image classification.

## Model Details

- Architecture: Vision Transformer (ViT)
- Framework: PyTorch
- Weights format: safetensors
- Parameters: ~85.8M (F32)
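The parameter count and tensor type listed above can be verified programmatically once weights are loaded. A minimal sketch, using a toy `torch` module as a stand-in for the real model (which requires a download):

```python
import torch.nn as nn


def describe_model(model: nn.Module) -> tuple[int, set[str]]:
    """Return total parameter count and the set of parameter dtypes."""
    n_params = sum(p.numel() for p in model.parameters())
    dtypes = {str(p.dtype) for p in model.parameters()}
    return n_params, dtypes


# Toy module as a stand-in; the real ViT reports ~85.8M params in torch.float32.
toy = nn.Linear(10, 5)
n, dtypes = describe_model(toy)
print(n, dtypes)  # 55 parameters (10*5 weights + 5 biases), {'torch.float32'}
```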

## Usage

```python
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Load model from Hugging Face
MODEL_ID = "anonhs/vit-cancer-classifier"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = AutoModelForImageClassification.from_pretrained(MODEL_ID)
processor = AutoImageProcessor.from_pretrained(MODEL_ID)

model.to(device)
model.eval()


def predict_image(image_path):
    # Load image
    image = Image.open(image_path).convert("RGB")

    # Preprocess
    inputs = processor(images=image, return_tensors="pt")
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Inference
    with torch.no_grad():
        outputs = model(**inputs)
        probs = torch.softmax(outputs.logits, dim=1)
        pred_idx = torch.argmax(probs, dim=1).item()
        confidence = probs[0][pred_idx].item()

    label = model.config.id2label[pred_idx]
    return label, confidence


# Example usage
image_path = "sample_image.jpg"
label, confidence = predict_image(image_path)

print(f"Prediction: {label}")
print(f"Confidence: {confidence:.4f}")
```
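If you need more than the single top label, the same logits can be ranked with `torch.topk`. A minimal sketch, using dummy logits in place of a real model output and an illustrative `id2label` mapping (the actual labels come from `model.config.id2label`):

```python
import torch


def top_k_predictions(logits, id2label, k=3):
    """Return the k most likely (label, probability) pairs for one sample."""
    probs = torch.softmax(logits, dim=-1)
    top_probs, top_idx = torch.topk(probs, k=min(k, probs.shape[-1]))
    return [(id2label[i.item()], p.item()) for p, i in zip(top_probs, top_idx)]


# Dummy logits standing in for outputs.logits[0]; label names are illustrative.
logits = torch.tensor([2.0, 0.5, -1.0])
id2label = {0: "class_a", 1: "class_b", 2: "class_c"}

for label, prob in top_k_predictions(logits, id2label):
    print(f"{label}: {prob:.4f}")
```

In the real inference loop you would call `top_k_predictions(outputs.logits[0], model.config.id2label)` inside the `torch.no_grad()` block.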
