---
library_name: pytorch
license: other
tags:
- android
pipeline_tag: image-segmentation

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mediapipe_selfie/web-assets/model_demo.png)

# MediaPipe-Selfie-Segmentation: Optimized for Mobile Deployment
## Segments a person from the background in selfie images and provides real-time background segmentation for video conferencing


Lightweight model that segments a person from the background in square or landscape selfie and video-conference imagery.

This model is an implementation of MediaPipe-Selfie-Segmentation found [here](https://github.com/google/mediapipe/tree/master/mediapipe/modules/selfie_segmentation).


This repository provides scripts to run MediaPipe-Selfie-Segmentation on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mediapipe_selfie).



### Model Details

- **Model Type:** Semantic segmentation
- **Model Stats:**
  - Model checkpoint: Square
  - Input resolution (Square): 256x256
  - Input resolution (Landscape): 144x256
  - Number of output classes: 6
  - Number of parameters: 106K
  - Model size (float): 447 KB

| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| MediaPipe-Selfie-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 1.769 ms | 0 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 1.735 ms | 1 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 0.955 ms | 0 - 135 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 0.95 ms | 1 - 142 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 0.713 ms | 0 - 10 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 0.698 ms | 1 - 3 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 1.076 ms | 0 - 3 MB | NPU | [MediaPipe-Selfie-Segmentation.onnx.zip](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.onnx.zip) |
| MediaPipe-Selfie-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 1.004 ms | 0 - 118 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 0.977 ms | 1 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 1.769 ms | 0 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 1.735 ms | 1 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | TFLITE | 0.715 ms | 0 - 3 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | SA8255 (Proxy) | Qualcomm® SA8255P (Proxy) | QNN_DLC | 0.697 ms | 1 - 4 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 1.346 ms | 0 - 127 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 1.321 ms | 0 - 126 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | TFLITE | 0.712 ms | 0 - 2 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | SA8650 (Proxy) | Qualcomm® SA8650P (Proxy) | QNN_DLC | 0.698 ms | 0 - 2 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 1.004 ms | 0 - 118 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 0.977 ms | 1 - 119 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 0.46 ms | 0 - 137 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 0.457 ms | 0 - 133 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 0.669 ms | 0 - 111 MB | NPU | [MediaPipe-Selfie-Segmentation.onnx.zip](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.onnx.zip) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 0.363 ms | 0 - 123 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 0.356 ms | 0 - 124 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 0.532 ms | 0 - 97 MB | NPU | [MediaPipe-Selfie-Segmentation.onnx.zip](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.onnx.zip) |
| MediaPipe-Selfie-Segmentation | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | TFLITE | 0.338 ms | 0 - 123 MB | NPU | [MediaPipe-Selfie-Segmentation.tflite](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.tflite) |
| MediaPipe-Selfie-Segmentation | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | QNN_DLC | 0.326 ms | 0 - 122 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | ONNX | 0.512 ms | 1 - 98 MB | NPU | [MediaPipe-Selfie-Segmentation.onnx.zip](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.onnx.zip) |
| MediaPipe-Selfie-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 0.859 ms | 1 - 1 MB | NPU | [MediaPipe-Selfie-Segmentation.dlc](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.dlc) |
| MediaPipe-Selfie-Segmentation | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1.016 ms | 2 - 2 MB | NPU | [MediaPipe-Selfie-Segmentation.onnx.zip](https://huggingface.co/qualcomm/MediaPipe-Selfie-Segmentation/blob/main/MediaPipe-Selfie-Segmentation.onnx.zip) |




## Installation


Install the package via pip:
```bash
# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[mediapipe-selfie]"
```
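
As a quick sanity check that the install worked, you can load the pretrained model in Python. This uses the same `Model` class as the compile example further below:

```python
# Minimal install check: load pretrained weights and inspect the input spec.
from qai_hub_models.models.mediapipe_selfie import Model

model = Model.from_pretrained()
# Per the Model Details above, the square checkpoint expects 256x256 input.
print(model.get_input_spec())
```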


## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to the [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
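
Once configured, one way to confirm the token works is to list the cloud-hosted devices you can target. This is a minimal sketch assuming the standard `qai_hub` client API:

```python
import qai_hub as hub

# If the API token is configured correctly, this prints the cloud-hosted
# devices available to you (e.g. "Samsung Galaxy S25").
for device in hub.get_devices():
    print(device.name)
```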



## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.mediapipe_selfie.demo
```

The demo above runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.mediapipe_selfie.demo
```


### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Runs a performance check on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android
* Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.mediapipe_selfie.export
```



## How does this work?

This [export script](https://aihub.qualcomm.com/models/mediapipe_selfie/qai_hub_models/models/MediaPipe-Selfie-Segmentation/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.mediapipe_selfie import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S25")

# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()

```
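
If you want to keep the compiled asset locally (for example, to deploy it on Android as described below), the target model can be downloaded. A short sketch, assuming the `qai_hub` `Model.download` helper and a TFLite compile target:

```python
# Download the compiled asset for local use or Android deployment.
# The filename here is an assumption; pick whatever suits your project.
target_model.download("MediaPipe-Selfie-Segmentation.tflite")
```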


Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, you can profile it on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
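
The results can also be fetched programmatically rather than read off the job URL. A sketch assuming the `qai_hub` `download_profile()` helper; the schema of the returned data may vary, so inspect it before relying on specific fields:

```python
# Fetch on-device profiling results once the job completes.
profile = profile_job.download_profile()

# The exact structure is not guaranteed here; list top-level sections first.
print(profile.keys())
```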

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

```
With the output of the model, you can compute metrics like PSNR and relative error, or
spot-check the output against the expected output.
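
For example, here is a hedged PSNR check. It assumes `download_output_data()` returns a dict mapping output names to lists of numpy arrays and that the model has a single output; adjust the unpacking if your outputs differ:

```python
import numpy as np
import torch

# Reference output from the local PyTorch model on the same sample inputs.
torch_inputs = [torch.tensor(data[0]) for _, data in input_data.items()]
torch_output = torch_model(*torch_inputs).detach().numpy()

# On-device output (assumption: single output head, batch of one).
device_output = list(on_device_output.values())[0][0]

# PSNR between reference and on-device outputs; higher means closer agreement.
mse = np.mean((torch_output - device_output) ** 2)
psnr = 20 * np.log10(np.abs(torch_output).max() / (np.sqrt(mse) + 1e-10))
print(f"PSNR: {psnr:.2f} dB")
```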

**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).



## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.mediapipe_selfie.demo --eval-mode on-device
```

**NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.mediapipe_selfie.demo -- --eval-mode on-device
```


## Deploying the compiled model to Android


The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploying the `.tflite` model in an Android application; a local sanity-check sketch follows this list.


- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.
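
Before wiring the `.tflite` asset into an Android app, it can be worth sanity-checking it on your desktop. A minimal sketch using the TensorFlow Lite Python interpreter (assumes `tensorflow` is installed and the model was downloaded with the filename used in the compile step above):

```python
import numpy as np
import tensorflow as tf

# Load the compiled TFLite asset and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="MediaPipe-Selfie-Segmentation.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read the input shape from the model itself instead of hardcoding it
# (the square checkpoint expects 256x256 per the Model Details above).
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

mask = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", mask.shape)  # per-pixel segmentation scores
```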


## View on Qualcomm® AI Hub
Get more details on MediaPipe-Selfie-Segmentation's performance across various devices [here](https://aihub.qualcomm.com/models/mediapipe_selfie).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).


## License
* The license for the original implementation of MediaPipe-Selfie-Segmentation can be found
  [here](https://github.com/google/mediapipe/blob/master/LICENSE).



## References
* [Image segmentation guide](https://developers.google.com/mediapipe/solutions/vision/image_segmenter/)
* [Source Model Implementation](https://github.com/google/mediapipe/tree/master/mediapipe/modules/selfie_segmentation)



## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).