prithivMLmods/High_Res-vs-Low_Res
How to use prithivMLmods/High_Res-vs-Low_Res with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="prithivMLmods/High_Res-vs-Low_Res")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageClassification

processor = AutoProcessor.from_pretrained("prithivMLmods/High_Res-vs-Low_Res")
model = AutoModelForImageClassification.from_pretrained("prithivMLmods/High_Res-vs-Low_Res")
```
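With the processor and model loaded directly, inference follows the standard Transformers pattern. A minimal sketch, assuming a local file `image.jpg` (a hypothetical path) and reading label names from the checkpoint config:

```python
import torch
from PIL import Image

# Sketch: single-image inference with the directly loaded model.
image = Image.open("image.jpg").convert("RGB")  # hypothetical local file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```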
High_Res-vs-Low_Res is an image classification vision-language encoder model fine-tuned from google/siglip2-base-patch16-224 for a single-label classification task. It is designed to assess the resolution quality of images using the SiglipForImageClassification architecture.
The model categorizes images into two classes:

- Class 0: high resolution image
- Class 1: low resolution image
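The class-to-index mapping can also be read from the checkpoint config rather than hardcoded; a small sketch (the exact label strings come from the model's `config.json`):

```python
from transformers import AutoConfig

# Sketch: read the label mapping straight from the checkpoint config.
config = AutoConfig.from_pretrained("prithivMLmods/High_Res-vs-Low_Res")
print(config.id2label)  # e.g. {0: "High Resolution Image", 1: "Low Resolution Image"}
```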
Classification Report:

| Class                 | Precision | Recall | F1-Score | Support |
|-----------------------|-----------|--------|----------|---------|
| high resolution image | 0.5697    | 0.5407 | 0.5548   | 1254    |
| low resolution image  | 0.8495    | 0.8639 | 0.8566   | 3762    |
| accuracy              |           |        | 0.7831   | 5016    |
| macro avg             | 0.7096    | 0.7023 | 0.7057   | 5016    |
| weighted avg          | 0.7795    | 0.7831 | 0.7812   | 5016    |
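For reference, a report in this format can be reproduced with scikit-learn once predictions have been collected over a labeled evaluation split; a sketch with hypothetical `y_true`/`y_pred` placeholders:

```python
from sklearn.metrics import classification_report

# Sketch: y_true and y_pred are hypothetical placeholders; in practice they
# come from running the model over a labeled evaluation set.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(classification_report(
    y_true, y_pred,
    target_names=["high resolution image", "low resolution image"],
    digits=4,
))
```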
The snippet below installs the dependencies and launches a small Gradio demo for the classifier:

```
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
from transformers.image_utils import load_image
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/High_Res-vs-Low_Res"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def resolution_classification(image):
    """Predicts image resolution classification."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = {"0": "High Resolution Image", "1": "Low Resolution Image"}
    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=resolution_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Image Resolution Classification",
    description="Upload an image to classify its resolution quality."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
The High_Res-vs-Low_Res model is designed to evaluate the resolution quality of images. It helps distinguish between high-resolution and low-resolution images, for example when filtering or curating an image collection (see the sketch below).
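A minimal sketch of that kind of filtering with the high-level pipeline; the `images/` directory and the `*.jpg` glob are illustrative assumptions, not part of the model card:

```python
from pathlib import Path
from transformers import pipeline

# Sketch: sort a folder of images into high-res / low-res buckets.
# The "images/" directory and "*.jpg" pattern are hypothetical choices.
pipe = pipeline("image-classification", model="prithivMLmods/High_Res-vs-Low_Res")

high_res, low_res = [], []
for path in Path("images").glob("*.jpg"):
    results = pipe(str(path))
    top = max(results, key=lambda r: r["score"])
    (high_res if "high" in top["label"].lower() else low_res).append(path.name)

print("high-res:", high_res)
print("low-res:", low_res)
```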
Base model: google/siglip2-base-patch16-224