YOLO26-M-cls

Ultralytics YOLO26 is the latest evolution in the YOLO series, engineered from the ground up for edge and low-power devices. This is the medium (M) variant for image classification tasks.

Model Specifications

Property                     Value
Input size                   224 × 224 pixels
Top-1 accuracy               78.1%
Top-5 accuracy               94.2%
CPU speed (ONNX)             17.2 ms
GPU speed (T4, TensorRT 10)  2.0 ms
Parameters                   11.6 M
FLOPs                        4.9 B

Key Features

The architecture of YOLO26 is guided by three core principles:

Simplicity: YOLO26 is a native end-to-end model, producing predictions directly without the need for non-maximum suppression (NMS). By eliminating this post-processing step, inference becomes faster, lighter, and easier to deploy in real-world systems.

Deployment Efficiency: The end-to-end design cuts out an entire stage of the pipeline, dramatically simplifying integration, reducing latency, and making deployment more robust across diverse environments.

Training Innovation: YOLO26 introduces the MuSGD optimizer, a hybrid of SGD and Muon — inspired by Moonshot AI's Kimi K2 breakthroughs in LLM training. This optimizer brings enhanced stability and faster convergence.
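
As a hedged illustration of the training workflow, the snippet below fine-tunes the classifier with the standard Ultralytics training API. The checkpoint filename is a placeholder, and whether MuSGD is engaged automatically or exposed as a named optimizer depends on the release, so the optimizer handling noted in the comments is an assumption.

from ultralytics import YOLO

# Load the classification checkpoint (filename is a placeholder; see Usage below)
model = YOLO("yolo26m-cls.pt")

# Fine-tune on mnist160, a tiny classification dataset bundled with ultralytics.
# The trainer selects the optimizer itself by default (optimizer="auto"); MuSGD is
# described as part of the training recipe rather than something configured here.
model.train(data="mnist160", epochs=3, imgsz=224)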

Additional Highlights

  • DFL Removal: Simplified inference and broader hardware compatibility
  • Up to 43% Faster CPU Inference: Optimized for edge computing (see the ONNX export sketch below)
  • MuSGD Optimizer: Advanced optimization methods from LLM training
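
The CPU speed quoted in the table above is measured on an ONNX export. As a minimal sketch using the standard Ultralytics export API (the local checkpoint path is an assumption), converting the model for CPU deployment looks like this:

from ultralytics import YOLO

# Export the classification checkpoint to ONNX for faster CPU inference
model = YOLO("model.pt")  # local checkpoint, e.g. downloaded as shown in Usage below
model.export(format="onnx")  # writes an .onnx file alongside the checkpoint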

Usage

Install the ultralytics package:

pip install ultralytics

Download the model weights from the Hugging Face Hub:

from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="openvision/yolo26-m-cls", filename="model.pt")

Run inference:

from ultralytics import YOLO
from PIL import Image
import requests

model = YOLO(model_path)

# Load a sample image from the COCO val2017 set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Run inference with the YOLO26m-cls model on the image
results = model.predict(image)
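
For classification models, each Results object exposes class probabilities via its probs attribute; a minimal way to read the top prediction:

# Inspect the top-1 prediction (Results.probs is populated for classification models)
result = results[0]
top1 = result.probs.top1  # index of the highest-scoring class
print(result.names[top1], float(result.probs.top1conf))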

Documentation

For more information, see the official YOLO26 documentation.
