How to use Nahrawy/AIorNot with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-classification", model="Nahrawy/AIorNot")
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png")

# Or load the model directly
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Nahrawy/AIorNot")
model = AutoModelForImageClassification.from_pretrained("Nahrawy/AIorNot")

Classification model used to distinguish real images from AI-generated images.
The model is swin-tiny-patch4-window7-224 fine-tuned on the aiornot dataset.
To use the model:

import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

labels = ["Real", "AI"]
feature_extractor = AutoFeatureExtractor.from_pretrained("Nahrawy/AIorNot")
model = AutoModelForImageClassification.from_pretrained("Nahrawy/AIorNot")

image = Image.open("example.jpg")  # path to the image you want to classify
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
prediction = logits.argmax(-1).item()
label = labels[prediction]
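The final step above maps the highest logit to a label; a softmax over the same logits also gives a confidence score. A minimal sketch of that post-processing, using made-up logit values for illustration (real values come from model(**inputs).logits):

```python
import math

labels = ["Real", "AI"]
# Hypothetical logits for one image, standing in for the model's output
logits = [1.2, -0.8]

# Softmax turns raw logits into probabilities that sum to 1
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The predicted class is the index of the largest logit
prediction = max(range(len(logits)), key=logits.__getitem__)
label = labels[prediction]
confidence = probs[prediction]
print(label, round(confidence, 3))
```

With these example logits the sketch predicts "Real" with roughly 0.88 confidence; with the model's actual logits the same two lines recover the label and its probability.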