YOLO26n (MLX)

Pure-MLX weights for YOLO26n, ready to run on Apple Silicon with yolo-mlx. No PyTorch at runtime, no cloud calls, no waiting on someone else's API — everything stays on your Mac.

This is the smallest variant in the YOLO26 MLX family: ideal for real-time webcam apps, low-latency demos, and anything where every millisecond counts.

Quickstart

```shell
pip install yolo-mlx huggingface_hub
```

```python
from huggingface_hub import hf_hub_download
from yolo26mlx import YOLO

weights = hf_hub_download("webAI-Official/yolo26n-mlx", "yolo26n.npz")
model = YOLO(weights)

results = model.predict("https://ultralytics.com/images/bus.jpg", conf=0.25)
results[0].save()
```
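If you want to post-process raw detections yourself instead of calling results[0].save(), a minimal confidence filter might look like the sketch below. The (box, score, class_id) tuple layout is an assumption for illustration only; yolo-mlx's actual result objects may differ.

```python
# Hypothetical post-processing sketch: the (box, score, class_id) tuples are
# illustrative stand-ins, not the actual yolo-mlx result objects.
def filter_detections(detections, conf=0.25):
    """Keep detections whose score meets the confidence threshold."""
    return [d for d in detections if d[1] >= conf]

detections = [
    ((48, 122, 310, 540), 0.91, 0),    # person
    ((12, 80, 600, 470), 0.88, 5),     # bus
    ((200, 300, 220, 330), 0.11, 26),  # low-confidence detection, dropped
]
kept = filter_detections(detections, conf=0.25)
print(len(kept))  # → 2
```

The same conf=0.25 threshold passed to predict() above is what this filter reproduces by hand.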

Specs

| Variant | mAP@0.5:0.95 | FPS (M4 Pro) | Best for |
|---|---|---|---|
| yolo26n | 40.2% | 170 | Real-time, low-latency demos |

Other variants in this family: yolo26s-mlx · yolo26m-mlx · yolo26l-mlx · yolo26x-mlx

Requirements

  • Apple Silicon Mac (M1, M2, M3, or M4)
  • macOS 14.0+
  • Python 3.10+

Intel Macs are not supported — the whole point of MLX is Apple Silicon native acceleration.
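A script can fail fast on unsupported hardware by checking the platform before importing MLX. This is a standard-library-only sketch; is_apple_silicon is a hypothetical helper, not part of yolo-mlx:

```python
import platform

def is_apple_silicon(system=None, machine=None):
    """True on Apple Silicon Macs: macOS reports system 'Darwin', machine 'arm64'."""
    system = system if system is not None else platform.system()
    machine = machine if machine is not None else platform.machine()
    return system == "Darwin" and machine == "arm64"

# An Intel Mac reports Darwin/x86_64, so it is correctly rejected.
print(is_apple_silicon("Darwin", "x86_64"))  # → False
```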

What's in this repo

| File | Description |
|---|---|
| yolo26n.npz | MLX-format weights, converted from the YOLO26n .pt checkpoint and verified shape-by-shape against the source. |
| README.md | This card. |
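Because .npz is just a zip archive of NumPy arrays, a shape-by-shape inspection like the one described above can be reproduced with plain NumPy. This sketch uses an in-memory dummy file; the key names in the real yolo26n.npz depend on the converter:

```python
import io
import numpy as np

# Build a tiny stand-in .npz in memory (the real file has converter-defined keys).
buf = io.BytesIO()
np.savez(buf, conv1_weight=np.zeros((16, 3, 3, 3)), conv1_bias=np.zeros(16))
buf.seek(0)

# Loading the archive exposes each array by name, so shapes can be compared
# one-by-one against the source checkpoint.
weights = np.load(buf)
for name in sorted(weights.files):
    print(name, weights[name].shape)
```

Pointing np.load at the downloaded yolo26n.npz instead of the buffer prints the real layer names and shapes.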

Training data

Pretrained on COCO (80 classes). For domain-specific use cases, fine-tune on your own data — see the training guide in the upstream repo.
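Predicted class ids index into the standard 80-class COCO label list. A small lookup helper, with the list truncated here for brevity (class_name is an illustrative helper, not a yolo-mlx function):

```python
# First entries of the standard 80-class COCO label list (truncated for brevity).
COCO_CLASSES = [
    "person", "bicycle", "car", "motorcycle", "airplane",
    "bus", "train", "truck", "boat", "traffic light",
]

def class_name(cls_id):
    """Map a predicted class id to its COCO name, falling back to a placeholder."""
    if 0 <= cls_id < len(COCO_CLASSES):
        return COCO_CLASSES[cls_id]
    return f"class_{cls_id}"

print(class_name(5))  # → bus
```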

License

AGPL-3.0, inherited from upstream thewebAI/yolo-mlx. Free to use, fork, modify, and ship for personal projects, research, and prototypes. If you deploy this as a hosted service for real users, the AGPL's network clause requires you to make your service's source code available to those users under the same license.

About webAI

webAI builds the sovereign AI platform — AI that runs on your infrastructure, stays under your control, and compounds with your knowledge. Every release here reflects a simple belief: open models, owned locally, coordinated intelligently, compound into something no centralized system can match.

🌐 webai.com · 💬 community.webai.com
