YOLO26m (MLX)

Pure-MLX weights for YOLO26m, ready to run on Apple Silicon with yolo-mlx. No PyTorch at runtime, no cloud calls, no waiting on someone else's API — everything stays on your Mac.

This is the mid-size variant in the YOLO26 MLX family: higher accuracy than the n and s variants, while still fast enough for many real-time use cases.

Quickstart

pip install yolo-mlx huggingface_hub

from huggingface_hub import hf_hub_download
from yolo26mlx import YOLO

# Download the MLX weights from the Hub (cached locally after the first call)
weights = hf_hub_download("webAI-Official/yolo26m-mlx", "yolo26m.npz")
model = YOLO(weights)

# conf sets the minimum confidence threshold for reported detections
results = model.predict("https://ultralytics.com/images/bus.jpg", conf=0.25)
results[0].save()

Specs

Variant    mAP@0.5:0.95    FPS (M4 Pro)    Best for
yolo26m    52.3%           55              Higher accuracy, still fast
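As a rough rule of thumb, the throughput above translates into a per-frame latency budget. A back-of-the-envelope sketch (actual latency varies with image size and pipeline overhead):

```python
# The table above reports ~55 FPS on an M4 Pro; that implies a per-frame budget:
fps = 55
ms_per_frame = 1000 / fps
print(f"{ms_per_frame:.1f} ms per frame")  # ~18.2 ms
```

Anything under ~33 ms leaves headroom for a 30 FPS camera stream.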

Other variants in this family: yolo26n-mlx · yolo26s-mlx · yolo26l-mlx · yolo26x-mlx

Requirements

  • Apple Silicon Mac (M1, M2, M3, or M4)
  • macOS 14.0+
  • Python 3.10+

Intel Macs are not supported — the whole point of MLX is Apple Silicon native acceleration.
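To fail fast before installing, here is a minimal preflight sketch using only the Python standard library. It mirrors the requirements above; the macOS 14 check is omitted for brevity, and the function name is ours, not part of yolo-mlx:

```python
import platform
import sys

def environment_ok() -> bool:
    """Return True if this machine meets the requirements above:
    an Apple Silicon Mac (arm64) running Python 3.10+."""
    is_apple_silicon = sys.platform == "darwin" and platform.machine() == "arm64"
    python_ok = sys.version_info >= (3, 10)
    return is_apple_silicon and python_ok

if __name__ == "__main__":
    print("ready" if environment_ok() else "unsupported environment")
```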

What's in this repo

File          Description
yolo26m.npz   MLX-format weights, converted from the YOLO26m .pt checkpoint and verified shape-by-shape against the source.
README.md     This card.
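The .npz archive can be inspected without yolo-mlx at all, since NumPy reads the format directly. A minimal sketch of the kind of shape-by-shape listing described above — the toy archive and its layer names here are made up as a stand-in for yolo26m.npz, whose real tensor names depend on the converter:

```python
import numpy as np

# Stand-in for yolo26m.npz: a tiny archive with hypothetical layer names.
np.savez("toy_weights.npz",
         **{"conv1.weight": np.zeros((16, 3, 3, 3), dtype=np.float32),
            "conv1.bias": np.zeros((16,), dtype=np.float32)})

# The same loop works on the real file: hf_hub_download(...) then np.load(path).
with np.load("toy_weights.npz") as weights:
    shapes = {name: weights[name].shape for name in weights.files}

for name, shape in sorted(shapes.items()):
    print(f"{name}: {shape}")
```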

Training data

Pretrained on COCO (80 classes). For domain-specific use cases, fine-tune on your own data — see the training guide in the upstream repo.

License

AGPL-3.0, inherited from upstream thewebAI/yolo-mlx. You are free to use, fork, modify, and ship it for personal projects, research, and prototypes. If you deploy it as a hosted service, the AGPL's network clause requires you to offer the corresponding source to those users under the same license.

About webAI

webAI builds the sovereign AI platform — AI that runs on your infrastructure, stays under your control, and compounds with your knowledge. Every release here reflects a simple belief: open models, owned locally, coordinated intelligently, compound into something no centralized system can match.

🌐 webai.com · 💬 community.webai.com
