YOLO26x (MLX)

Pure-MLX weights for YOLO26x, ready to run on Apple Silicon with yolo-mlx. No PyTorch at runtime, no cloud calls, no waiting on someone else's API — everything stays on your Mac.

This is the largest, most accurate variant in the YOLO26 MLX family. Use it when accuracy is paramount and you can budget the extra compute per frame.

Quickstart

pip install yolo-mlx huggingface_hub

from huggingface_hub import hf_hub_download
from yolo26mlx import YOLO

# Download the MLX weights from the Hub and load them
weights = hf_hub_download("webAI-Official/yolo26x-mlx", "yolo26x.npz")
model = YOLO(weights)

# Run inference on an image and save the annotated result
results = model.predict("https://ultralytics.com/images/bus.jpg", conf=0.25)
results[0].save()
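
To process a folder of local images instead of a single URL, a loop like the one below should work. This is a sketch: it assumes predict accepts local file paths the same way it accepts URLs, and the images/ folder name is a placeholder for your own data.

from pathlib import Path

# Placeholder folder of .jpg images; point this at your own data
for image_path in sorted(Path("images").glob("*.jpg")):
    results = model.predict(str(image_path), conf=0.25)
    results[0].save()  # same call as the quickstart: saves an annotated copy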

Specs

| Variant | mAP@0.5:0.95 | FPS (M4 Pro) | Best for |
|---------|--------------|--------------|----------|
| yolo26x | 56.7% | 24 | Max accuracy, slower |

Other variants in this family: yolo26n-mlx · yolo26s-mlx · yolo26m-mlx · yolo26l-mlx
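
Every variant loads the same way; only the repo id and weights filename change. The snippet below is a sketch that assumes the sibling repos follow this repo's naming pattern (e.g. webAI-Official/yolo26n-mlx hosting yolo26n.npz); check the target repo for the exact filename.

from huggingface_hub import hf_hub_download
from yolo26mlx import YOLO

# Assumed naming pattern for the nano variant; verify against the sibling repo
weights = hf_hub_download("webAI-Official/yolo26n-mlx", "yolo26n.npz")
model = YOLO(weights)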

Requirements

  • Apple Silicon Mac (M1, M2, M3, or M4)
  • macOS 14.0+
  • Python 3.10+

Intel Macs are not supported — the whole point of MLX is Apple Silicon native acceleration.
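
If you want to verify your environment before installing, a quick check like the one below works. It is plain Python, nothing yolo-mlx specific.

import platform
import sys

# MLX needs an Apple Silicon (arm64) Mac, macOS 14+, and Python 3.10+
assert sys.version_info >= (3, 10), "Python 3.10+ required"
assert platform.system() == "Darwin" and platform.machine() == "arm64", \
    "Apple Silicon Mac required (Intel Macs are not supported)"
macos_major = int(platform.mac_ver()[0].split(".")[0])
assert macos_major >= 14, "macOS 14.0+ required"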

What's in this repo

| File | Description |
|------|-------------|
| yolo26x.npz | MLX-format weights, converted from the YOLO26x .pt checkpoint and verified shape-by-shape against the source. |
| README.md | This card. |
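
The weights file is a standard NumPy .npz archive, so you can inspect the stored parameter names and shapes without loading the model at all:

import numpy as np

# List every array in the checkpoint along with its shape
with np.load("yolo26x.npz") as weights:
    for name in sorted(weights.files):
        print(name, weights[name].shape)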

Training data

Pretrained on COCO (80 classes). For domain-specific use cases, fine-tune on your own data — see the training guide in the upstream repo.

License

AGPL-3.0, inherited from upstream thewebAI/yolo-mlx. You are free to use, fork, modify, and ship it for personal projects, research, and prototypes. If you deploy it as part of a hosted service for real users, the AGPL requires you to make your service's source available to those users under the same license.

About webAI

webAI builds the sovereign AI platform — AI that runs on your infrastructure, stays under your control, and compounds with your knowledge. Every release here reflects a simple belief: open models, owned locally, coordinated intelligently, compound into something no centralized system can match.

🌐 webai.com · 💬 community.webai.com
