Update content to reflect `https://github.com/ultralytics/ultralytics/blob/main/README.md`
#1 opened by lakshanthad

README.md CHANGED
@@ -52,11 +52,11 @@ model-index:

</div>
</div>

[Ultralytics](https://www.ultralytics.com/) creates cutting-edge, state-of-the-art (SOTA) [YOLO models](https://www.ultralytics.com/yolo) built on years of foundational research in computer vision and AI. Constantly updated for performance and flexibility, our models are **fast**, **accurate**, and **easy to use**. They excel at [object detection](https://docs.ultralytics.com/tasks/detect/), [tracking](https://docs.ultralytics.com/modes/track/), [instance segmentation](https://docs.ultralytics.com/tasks/segment/), [image classification](https://docs.ultralytics.com/tasks/classify/), and [pose estimation](https://docs.ultralytics.com/tasks/pose/) tasks.

Find detailed documentation in the [Ultralytics Docs](https://docs.ultralytics.com/). Get support via [GitHub Issues](https://github.com/ultralytics/ultralytics/issues/new/choose). Join discussions on [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/), and the [Ultralytics Community Forums](https://community.ultralytics.com/)!

Request an Enterprise License for commercial use at [Ultralytics Licensing](https://www.ultralytics.com/license).

<a href="https://platform.ultralytics.com/ultralytics/yolo26" target="_blank">
<img width="100%" src="https://raw.githubusercontent.com/ultralytics/assets/refs/heads/main/yolo/performance-comparison.png" alt="YOLO26 performance plots">

@@ -72,14 +72,14 @@

</div>
</div>

## <div align="center">📄 Documentation</div>

See below for quickstart installation and usage examples. For comprehensive guidance on training, validation, prediction, and deployment, refer to our full [Ultralytics Docs](https://docs.ultralytics.com/).

<details open>
<summary>Install</summary>

Install the `ultralytics` package, including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/pyproject.toml), in a [**Python>=3.8**](https://www.python.org/) environment with [**PyTorch>=1.8**](https://pytorch.org/get-started/locally/).

<div align="left" style="display: flex; flex-wrap: wrap; justify-content: left; align-items: center; gap: 5px;">
<a href="https://pypi.org/project/ultralytics/"><img src="https://img.shields.io/pypi/v/ultralytics?logo=pypi&logoColor=white" alt="PyPI - Version"></a>

@@ -91,7 +91,7 @@

```bash
pip install ultralytics
```

For alternative installation methods, including [Conda](https://anaconda.org/conda-forge/ultralytics), [Docker](https://hub.docker.com/r/ultralytics/ultralytics), and building from source via Git, please consult the [Quickstart Guide](https://docs.ultralytics.com/quickstart/).

<div align="left" style="display: flex; flex-wrap: wrap; justify-content: left; align-items: center; gap: 5px;">
<a href="https://anaconda.org/conda-forge/ultralytics"><img src="https://img.shields.io/conda/vn/conda-forge/ultralytics?logo=condaforge" alt="Conda Version"></a>

@@ -106,53 +106,57 @@

### CLI

You can use Ultralytics YOLO directly from the Command Line Interface (CLI) with the `yolo` command:

```bash
# Predict using a pretrained YOLO model (e.g., YOLO26n) on an image
yolo predict model=yolo26n.pt source='https://ultralytics.com/images/bus.jpg'
```

The `yolo` command supports various tasks and modes, accepting additional arguments like `imgsz=640`. Explore the YOLO [CLI Docs](https://docs.ultralytics.com/usage/cli/) for more examples.

### Python

Ultralytics YOLO can also be integrated directly into your Python projects. It accepts the same [configuration arguments](https://docs.ultralytics.com/usage/cfg/) as the CLI:

```python
from ultralytics import YOLO

# Load a pretrained YOLO26n model
model = YOLO("yolo26n.pt")

# Train the model on the COCO8 dataset for 100 epochs
train_results = model.train(
    data="coco8.yaml",  # Path to dataset configuration file
    epochs=100,  # Number of training epochs
    imgsz=640,  # Image size for training
    device="cpu",  # Device to run on (e.g., 'cpu', 0, [0,1,2,3])
)

# Evaluate the model's performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")  # Predict on an image
results[0].show()  # Display results

# Export the model to ONNX format for deployment
path = model.export(format="onnx")  # Returns the path to the exported model
```

Discover more examples in the YOLO [Python Docs](https://docs.ultralytics.com/usage/python/).
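The same `YOLO` object also exposes tracking mode for video streams. A minimal sketch, assuming a local video at the hypothetical path "path/to/video.mp4":

```python
from ultralytics import YOLO

# Tracking reuses detection weights; persist=True keeps track IDs across frames
model = YOLO("yolo26n.pt")
results = model.track(source="path/to/video.mp4", persist=True)

# Each frame's result carries boxes with stable track IDs (None if nothing was tracked)
for r in results:
    if r.boxes.id is not None:
        print(r.boxes.id.tolist())
```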

</details>

## <div align="center">✨ Models</div>

Ultralytics supports a wide range of YOLO models, from early versions like [YOLOv3](https://docs.ultralytics.com/models/yolov3/) to the latest [YOLO26](https://docs.ultralytics.com/models/yolo26/). The tables below showcase YOLO26 models pretrained on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset for [Detection](https://docs.ultralytics.com/tasks/detect/), [Segmentation](https://docs.ultralytics.com/tasks/segment/), and [Pose Estimation](https://docs.ultralytics.com/tasks/pose/). Additionally, [Classification](https://docs.ultralytics.com/tasks/classify/) models pretrained on the [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/) dataset are available. [Tracking](https://docs.ultralytics.com/modes/track/) mode is compatible with all Detection, Segmentation, and Pose models. All [Models](https://docs.ultralytics.com/models/) are automatically downloaded from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) upon first use.

<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png" alt="Ultralytics YOLO supported tasks">
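Because weights download on first use, switching tasks is just a matter of the filename. A minimal sketch; the nano-scale "-n" weight names below are assumptions extrapolated from `yolo26n.pt` and the l/x names in the tables:

```python
from ultralytics import YOLO

# One class covers every task; the filename suffix selects the pretrained head
detect = YOLO("yolo26n.pt")  # object detection
segment = YOLO("yolo26n-seg.pt")  # instance segmentation (assumed filename)
classify = YOLO("yolo26n-cls.pt")  # image classification (assumed filename)
pose = YOLO("yolo26n-pose.pt")  # pose estimation (assumed filename)

print(detect.task, segment.task, classify.task, pose.task)  # detect segment classify pose
```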

<details open><summary>Detection (COCO)</summary>

Explore the [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples. These models are trained on the [COCO dataset](https://cocodataset.org/), featuring 80 object classes.

| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | mAP<sup>val<br>50-95(e2e) | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |

@@ -162,14 +166,14 @@

| [YOLO26l](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26l.pt) | 640 | 55.0 | 54.4 | 286.2 ± 2.0 | 6.2 ± 0.2 | 24.8 | 86.4 |
| [YOLO26x](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26x.pt) | 640 | 57.5 | 56.9 | 525.8 ± 4.0 | 11.8 ± 0.2 | 55.7 | 193.9 |

- **mAP<sup>val</sup>** values refer to single-model single-scale performance on the [COCO val2017](https://cocodataset.org/) dataset. See [YOLO Performance Metrics](https://docs.ultralytics.com/guides/yolo-performance-metrics/) for details. <br>Reproduce with `yolo val detect data=coco.yaml device=0`
- **Speed** metrics are averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. CPU speeds measured with [ONNX](https://onnx.ai/) export. GPU speeds measured with [TensorRT](https://developer.nvidia.com/tensorrt) export. <br>Reproduce with `yolo val detect data=coco.yaml batch=1 device=0|cpu`
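The CLI reproduce commands have a one-to-one Python equivalent. A minimal sketch, using the small `coco8.yaml` sample dataset rather than the full COCO download the table numbers are based on:

```python
from ultralytics import YOLO

# Validate a pretrained detection model; coco8.yaml is a tiny 8-image COCO sample
model = YOLO("yolo26n.pt")
metrics = model.val(data="coco8.yaml", device="cpu")  # device=0 to match the GPU command
print(metrics.box.map)  # mAP50-95
```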

</details>

<details><summary>Segmentation (COCO)</summary>

Refer to the [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples. These models are trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), including 80 classes.

| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95(e2e) | mAP<sup>mask<br>50-95(e2e) | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | ------------------------- | -------------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |

@@ -179,14 +183,14 @@

| [YOLO26l-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26l-seg.pt) | 640 | 54.4 | 45.5 | 387.0 ± 3.7 | 8.0 ± 0.1 | 28.0 | 139.8 |
| [YOLO26x-seg](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26x-seg.pt) | 640 | 56.5 | 47.0 | 787.0 ± 6.8 | 16.4 ± 0.1 | 62.8 | 313.5 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](https://cocodataset.org/) dataset. See [YOLO Performance Metrics](https://docs.ultralytics.com/guides/yolo-performance-metrics/) for details. <br>Reproduce with `yolo val segment data=coco.yaml device=0`
- **Speed** metrics are averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. CPU speeds measured with [ONNX](https://onnx.ai/) export. GPU speeds measured with [TensorRT](https://developer.nvidia.com/tensorrt) export. <br>Reproduce with `yolo val segment data=coco.yaml batch=1 device=0|cpu`
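Segmentation results add a `masks` attribute alongside boxes. A minimal sketch; the "-n" weight name is an assumption following the table's naming pattern:

```python
from ultralytics import YOLO

model = YOLO("yolo26n-seg.pt")  # assumed nano-scale segmentation weights
results = model("https://ultralytics.com/images/bus.jpg")

masks = results[0].masks  # None when nothing is detected
if masks is not None:
    print(masks.data.shape)  # (num_instances, H, W) binary masks
```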

</details>

<details><summary>Classification (ImageNet)</summary>

Consult the [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples. These models are trained on [ImageNet](https://docs.ultralytics.com/datasets/classify/imagenet/), covering 1000 classes.

| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 224 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |

@@ -196,14 +200,14 @@

| [YOLO26l-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26l-cls.pt) | 224 | 79.0 | 94.6 | 23.2 ± 0.3 | 2.8 ± 0.0 | 14.1 | 6.2 |
| [YOLO26x-cls](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26x-cls.pt) | 224 | 79.9 | 95.0 | 41.4 ± 0.9 | 3.8 ± 0.0 | 29.6 | 13.6 |

- **acc** values represent model accuracy on the [ImageNet](https://www.image-net.org/) dataset validation set. <br>Reproduce with `yolo val classify data=path/to/ImageNet device=0`
- **Speed** metrics are averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. CPU speeds measured with [ONNX](https://onnx.ai/) export. GPU speeds measured with [TensorRT](https://developer.nvidia.com/tensorrt) export. <br>Reproduce with `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
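Classification results expose class probabilities through `probs`. A minimal sketch; the "-n" weight name is an assumption following the table's naming pattern:

```python
from ultralytics import YOLO

model = YOLO("yolo26n-cls.pt")  # assumed nano-scale classification weights
results = model("https://ultralytics.com/images/bus.jpg")

probs = results[0].probs
print(probs.top1, probs.top1conf.item())  # top-1 ImageNet class index and its confidence
```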

</details>

<details><summary>Pose (COCO)</summary>

See the [Pose Estimation Docs](https://docs.ultralytics.com/tasks/pose/) for usage examples. These models are trained on [COCO-Pose](https://docs.ultralytics.com/datasets/pose/coco/), focusing on the 'person' class.

| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95(e2e) | mAP<sup>pose<br>50(e2e) | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------- | --------------------- | -------------------------- | ----------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |

@@ -213,14 +217,14 @@

| [YOLO26l-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26l-pose.pt) | 640 | 70.4 | 90.5 | 275.4 ± 2.4 | 6.5 ± 0.1 | 25.9 | 91.3 |
| [YOLO26x-pose](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26x-pose.pt) | 640 | 71.6 | 91.6 | 565.4 ± 3.0 | 12.2 ± 0.2 | 57.6 | 201.7 |

- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO Keypoints val2017](https://docs.ultralytics.com/datasets/pose/coco/) dataset. See [YOLO Performance Metrics](https://docs.ultralytics.com/guides/yolo-performance-metrics/) for details. <br>Reproduce with `yolo val pose data=coco-pose.yaml device=0`
- **Speed** metrics are averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. CPU speeds measured with [ONNX](https://onnx.ai/) export. GPU speeds measured with [TensorRT](https://developer.nvidia.com/tensorrt) export. <br>Reproduce with `yolo val pose data=coco-pose.yaml batch=1 device=0|cpu`
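Pose results carry per-person keypoints in addition to boxes. A minimal sketch; the "-n" weight name is an assumption following the table's naming pattern:

```python
from ultralytics import YOLO

model = YOLO("yolo26n-pose.pt")  # assumed nano-scale pose weights
results = model("https://ultralytics.com/images/bus.jpg")

kpts = results[0].keypoints  # None when no people are detected
if kpts is not None:
    print(kpts.xy.shape)  # (num_persons, num_keypoints, 2) pixel coordinates
```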

</details>

<details><summary>Oriented Bounding Boxes (DOTAv1)</summary>

Check the [OBB Docs](https://docs.ultralytics.com/tasks/obb/) for usage examples. These models are trained on [DOTAv1](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10/), including 15 classes.

| Model | size<br><sup>(pixels) | mAP<sup>test<br>50-95(e2e) | mAP<sup>test<br>50(e2e) | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>T4 TensorRT10<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------------- | ----------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |

@@ -230,12 +234,12 @@

| [YOLO26l-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26l-obb.pt) | 1024 | 56.2 | 81.6 | 735.6 ± 3.1 | 13.0 ± 0.2 | 25.6 | 230.0 |
| [YOLO26x-obb](https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo26x-obb.pt) | 1024 | 56.7 | 81.7 | 1485.7 ± 11.5 | 30.5 ± 0.9 | 57.6 | 516.5 |

- **mAP<sup>test</sup>** values are for single-model multiscale performance on the [DOTAv1 test set](https://captain-whu.github.io/DOTA/dataset.html). <br>Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to the [DOTA evaluation server](https://captain-whu.github.io/DOTA/evaluation.html).
- **Speed** metrics are averaged over [DOTAv1 val images](https://docs.ultralytics.com/datasets/obb/dota-v2/#dota-v10) using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. CPU speeds measured with [ONNX](https://onnx.ai/) export. GPU speeds measured with [TensorRT](https://developer.nvidia.com/tensorrt) export. <br>Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`
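OBB results store rotated boxes on an `obb` attribute instead of `boxes`. A minimal sketch; both the "-n" weight name and the image path are assumptions:

```python
from ultralytics import YOLO

model = YOLO("yolo26n-obb.pt")  # assumed nano-scale OBB weights
results = model("path/to/aerial_image.jpg")  # hypothetical aerial image

obb = results[0].obb  # None when nothing is detected
if obb is not None:
    print(obb.xyxyxyxy.shape)  # (num_boxes, 4, 2) rotated-box corner points
```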

</details>

## <div align="center">🧩 Integrations</div>

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with partners like [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/), [Comet ML](https://docs.ultralytics.com/integrations/comet/), [Roboflow](https://docs.ultralytics.com/integrations/roboflow/), and [Intel OpenVINO](https://docs.ultralytics.com/integrations/openvino/), can optimize your AI workflow. Explore more at [Ultralytics Integrations](https://docs.ultralytics.com/integrations/).

@@ -249,25 +253,27 @@

| :---: | :---: | :---: | :---: |
| Streamline Ultralytics YOLO workflows: Label, train, and deploy effortlessly with [Ultralytics Platform](https://platform.ultralytics.com/ultralytics/yolo26). Try now! | Track experiments, hyperparameters, and results with [Weights & Biases](https://docs.ultralytics.com/integrations/weights-biases/). | Free forever, [Comet ML](https://docs.ultralytics.com/integrations/comet/) lets you save Ultralytics YOLO models, resume training, and interactively visualize predictions. | Run Ultralytics YOLO inference up to 6x faster with [Neural Magic DeepSparse](https://docs.ultralytics.com/integrations/neural-magic/). |

## <div align="center">🤝 Contribute</div>

We thrive on community collaboration! Ultralytics YOLO wouldn't be the SOTA framework it is without contributions from developers like you. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) to get started. We also welcome your feedback—share your experience by completing our [Survey](https://www.ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey). A huge **Thank You** 🙏 to everyone who contributes!

<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->

<a href="https://github.com/ultralytics/ultralytics/graphs/contributors">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png" alt="Ultralytics open-source contributors"></a>

We look forward to your contributions to help make the Ultralytics ecosystem even better!

## <div align="center">📜 License</div>

Ultralytics offers two licensing options to suit different needs:

- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license/agpl-v3) open-source license is perfect for students, researchers, and enthusiasts. It encourages open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for full details.
- **Ultralytics Enterprise License**: Designed for commercial use, this license allows for the seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. If your use case involves commercial deployment, please contact us via [Ultralytics Licensing](https://www.ultralytics.com/license).

## <div align="center">📞 Contact</div>

For bug reports and feature requests related to Ultralytics software, please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For questions, discussions, and community support, join our active communities on [Discord](https://discord.com/invite/ultralytics), [Reddit](https://www.reddit.com/r/ultralytics/), and the [Ultralytics Community Forums](https://community.ultralytics.com/). We're here to help with all things Ultralytics!

<br>
<div align="center" style="display: flex; flex-wrap: wrap; justify-content: center; align-items: center; gap: 20px;">