v0.46.1
See https://github.com/quic/ai-hub-models/releases/v0.46.1 for the changelog.

README.md CHANGED
@@ -10,261 +10,90 @@ pipeline_tag: image-segmentation
# YOLOv8-Segmentation: Optimized for Mobile Deployment

## Real-time object segmentation optimized for mobile and edge by Ultralytics

Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.

This is based on the implementation of YOLOv8-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment).
## Installation

Install the package via pip:

```bash
# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[yolov8-seg]"
```

## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on cloud-hosted
devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
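Once configured, a quick way to confirm the token works is to list the devices available to your account. A minimal sketch using the public `qai_hub` client:

```python
import qai_hub as hub

# Lists the cloud-hosted devices visible to your account; this call fails
# with an authentication error if the API token is not configured correctly.
devices = hub.get_devices()
print(f"{len(devices)} cloud-hosted devices available")
for device in devices[:5]:
    print(device.name)
```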

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.yolov8_seg.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.yolov8_seg.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on-device on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Runs an accuracy check between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.yolov8_seg.export
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/yolov8_seg/qai_hub_models/models/YOLOv8-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.yolov8_seg import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S25")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
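For reference, `get_input_spec()` returns a mapping from input name to a (shape, dtype) pair, which is what `submit_compile_job` consumes. A quick way to inspect it (the exact name and shape in the comment are assumptions for the default configuration):

```python
# Inspect the input specification used for tracing and compilation.
# For the default 640x640 configuration this is typically something like
# {"image": ((1, 3, 640, 640), "float32")}, but verify for your setup.
print(torch_model.get_input_spec())
```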

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
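Besides the job URL, the profile results can also be fetched programmatically. A small sketch, assuming the job succeeds; the key names follow the current profile schema and may change between releases:

```python
# Blocks until the job finishes, then downloads the profile as a Python dict.
profile = profile_job.download_profile()
summary = profile["execution_summary"]
print("Estimated inference time (us):", summary["estimated_inference_time"])
```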

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics such as PSNR or relative error, or
spot-check the output against the expected output.
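For example, a PSNR comparison against a PyTorch reference might look like the sketch below. The `psnr` helper and the `reference_outputs` dict are illustrative, not part of `qai_hub`; `download_output_data()` returns a dict mapping each output name to a list of numpy arrays (one per submitted input).

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    mse = float(np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2))
    if mse == 0.0:
        return float("inf")
    peak = float(np.abs(reference).max()) or 1.0
    return 10.0 * np.log10(peak * peak / mse)

# `reference_outputs` is an assumed dict of PyTorch outputs computed on the
# same sample inputs, keyed by the same output names as `on_device_output`.
for name, arrays in on_device_output.items():
    print(f"{name}: PSNR = {psnr(reference_outputs[name], arrays[0]):.2f} dB")
```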

**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.yolov8_seg.demo --eval-mode on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).

```
%run -m qai_hub_models.models.yolov8_seg.demo -- --eval-mode on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the .tflite model in an Android application. A quick desktop
  sanity check of the exported asset is sketched after this list.

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.
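Before wiring the `.tflite` asset into an app, you can sanity-check it on a desktop with TensorFlow's TFLite interpreter. A minimal sketch, assuming TensorFlow is installed; the file name `yolov8_seg.tflite` is an illustrative placeholder:

```python
import numpy as np
import tensorflow as tf

# Load the exported model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="yolov8_seg.tflite")
interpreter.allocate_tensors()

# Feed a zero-filled input of the expected shape and dtype.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

# Print every output's name and shape to confirm the graph runs end to end.
for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```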

## View on Qualcomm® AI Hub
Get more details on YOLOv8-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/yolov8_seg).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

# YOLOv8-Segmentation: Optimized for Qualcomm Devices

Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes, segmentation masks and classes of objects in an image.

This is based on the implementation of YOLOv8-Segmentation found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment).

You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/yolov8_seg) library to export this model with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

Due to licensing restrictions, we cannot distribute pre-exported model assets for this model.
Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/yolov8_seg) Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

See the [YOLOv8-Segmentation repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/yolov8_seg) for usage instructions; a custom-weights sketch follows below.
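For instance, loading a fine-tuned checkpoint before compiling might look like this sketch. The checkpoint path is an illustrative placeholder, and the exact `from_pretrained` argument may differ across library versions:

```python
from qai_hub_models.models.yolov8_seg import Model

# Load custom weights instead of the default pre-trained checkpoint.
# "path/to/finetuned.pt" is a placeholder, not a shipped asset.
torch_model = Model.from_pretrained("path/to/finetuned.pt")

# From here, follow the library's export flow: trace the model and submit
# a compile job via qai_hub, as in the export script linked above.
```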

## Model Details

**Model Type:** Semantic segmentation

**Model Stats:**
- Model checkpoint: YOLOv8N-Seg
- Input resolution: 640x640
- Number of output classes: 80
- Number of parameters: 3.43M
- Model size (float): 13.2 MB
- Model size (w8a16): 3.91 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| YOLOv8-Segmentation | ONNX | float | Snapdragon® X Elite | 6.013 | 17 - 17 | NPU |
| YOLOv8-Segmentation | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 3.989 | 17 - 226 | NPU |
| YOLOv8-Segmentation | ONNX | float | Qualcomm® QCS8550 (Proxy) | 5.897 | 0 - 43 | NPU |
| YOLOv8-Segmentation | ONNX | float | Qualcomm® QCS9075 | 8.08 | 13 - 16 | NPU |
| YOLOv8-Segmentation | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 3.285 | 0 - 155 | NPU |
| YOLOv8-Segmentation | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.759 | 0 - 156 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Snapdragon® X Elite | 4.912 | 5 - 5 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 3.339 | 5 - 261 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 16.912 | 1 - 200 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 4.548 | 5 - 7 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® SA8775P | 6.278 | 1 - 203 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® QCS9075 | 6.029 | 5 - 15 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 9.835 | 5 - 199 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® SA7255P | 16.912 | 1 - 200 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Qualcomm® SA8295P | 9.2 | 2 - 166 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.648 | 0 - 198 | NPU |
| YOLOv8-Segmentation | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.927 | 5 - 203 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Snapdragon® X Elite | 4.296 | 2 - 2 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 2.574 | 2 - 84 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 11.362 | 3 - 9 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 7.868 | 0 - 57 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 3.839 | 2 - 4 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® SA8775P | 4.516 | 0 - 61 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 4.414 | 1 - 7 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 27.628 | 2 - 179 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 4.796 | 2 - 82 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® SA7255P | 7.868 | 0 - 57 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Qualcomm® SA8295P | 5.183 | 1 - 57 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 1.787 | 2 - 64 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 4.64 | 2 - 178 | NPU |
| YOLOv8-Segmentation | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 1.416 | 2 - 65 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 2.941 | 0 - 176 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 16.176 | 4 - 107 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 3.981 | 0 - 2 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® SA8775P | 23.252 | 4 - 108 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® QCS9075 | 5.765 | 4 - 23 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 8.896 | 4 - 204 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® SA7255P | 16.176 | 4 - 107 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Qualcomm® SA8295P | 8.553 | 4 - 174 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.227 | 0 - 111 | NPU |
| YOLOv8-Segmentation | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.772 | 0 - 127 | NPU |

## License
* The license for the original implementation of YOLOv8-Segmentation can be found
[here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).

## References
* [Ultralytics YOLOv8 Docs: Instance Segmentation](https://docs.ultralytics.com/tasks/segment/)
* [Source Model Implementation](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/segment)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).