# YOLOv11s Human vs Non-Human Classification

This is a fine-tuned YOLOv11s-cls model for binary (human vs. non-human) image classification.
## Performance
- Test Accuracy: 98.24% (836/851 correctly classified)
- Dataset: Human faces vs. non-human (statues, art, anime, gaming)
- Experiment ID: `yolo11s_cls_260104_086`
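As a quick sanity check, the headline accuracy follows directly from the reported counts:

```python
# Verify the reported test accuracy from the raw counts above
correct, total = 836, 851
accuracy = correct / total * 100
print(f"{accuracy:.2f}%")  # -> 98.24%
```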
## Model Usage
### 1. Using Ultralytics (Python)

The easiest way to run the trained PyTorch model:

```python
from ultralytics import YOLO

# Load the model
model = YOLO('path/to/best.pt')

# Predict on an image
results = model('image.jpg')

# Process results
for result in results:
    probs = result.probs  # Probs object for classification outputs
    print(f"Top-1 class: {result.names[probs.top1]}")
    print(f"Confidence: {probs.top1conf:.2f}")
```
### 2. Using the YOLO CLI

```bash
yolo classify predict model=path/to/best.pt source='image.jpg'
```
### 3. Using ONNX Runtime

For production or edge deployment without PyTorch:

```python
import onnxruntime as ort
import numpy as np
import cv2

# Initialize the inference session
session = ort.InferenceSession("model.onnx")

# Preprocess: BGR -> RGB, resize to 224x224, scale to [0, 1], CHW layout
img = cv2.imread("image.jpg")
img = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), (224, 224))
img = img.astype(np.float32) / 255.0
img = img.transpose(2, 0, 1)[np.newaxis, :]  # Add batch dimension

# Run inference
outputs = session.run(None, {"images": img})
predicted_idx = np.argmax(outputs[0])
print(f"Predicted Class Index: {predicted_idx}")
```
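Depending on the export settings, the ONNX output may be raw logits rather than probabilities; check your exported graph before relying on the scores. If they are logits, a softmax yields a usable confidence. A minimal sketch with illustrative (not real) scores:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative raw scores for the two classes (not actual model output)
logits = np.array([[2.1, -1.3]])
probs = softmax(logits)
predicted_idx = int(np.argmax(probs))
confidence = float(probs[0, predicted_idx])
print(predicted_idx, round(confidence, 3))
```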
## Baseline Metrics (04/01/2026)

- Model: YOLOv11s-cls
- Input Size: 224x224
- Test Accuracy: 98.24%
## Model tree for 8Opt/yolo11-human-nonhuman-cls

- Base model: Ultralytics/YOLO11