---
library_name: pytorch
license: unlicense
tags:
- real_time
- android
pipeline_tag: image-segmentation

---

# BiseNet: Optimized for Mobile Deployment

## Segment images or video by class in real-time on device

BiSeNet (Bilateral Segmentation Network) is a novel architecture designed for real-time semantic segmentation. It addresses the challenge of balancing spatial resolution and receptive field by employing a Spatial Path to preserve high-resolution features and a Context Path to capture a sufficient receptive field.

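To make the bilateral design concrete, here is a minimal, hypothetical PyTorch sketch of the two-path idea. It is not the layer configuration used by this checkpoint (which follows the source repository), only an illustration of how a detail-preserving path and a context path are combined:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBiSeNet(nn.Module):
    """Illustrative two-path segmentation network (not the real BiSeNet layers)."""

    def __init__(self, num_classes: int = 12):
        super().__init__()
        # Spatial Path: a few strided convs -> high-resolution, detail-rich features.
        self.spatial_path = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
        )
        # Context Path: aggressive downsampling -> large receptive field, cheaply.
        self.context_path = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 256, 3, stride=4, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(512, num_classes, 1)

    def forward(self, x):
        spatial = self.spatial_path(x)   # 1/8 resolution, fine detail
        context = self.context_path(x)   # 1/16 resolution, wide context
        context = F.interpolate(context, size=spatial.shape[2:],
                                mode="bilinear", align_corners=False)
        fused = torch.cat([spatial, context], dim=1)  # simple stand-in for the fusion module
        logits = self.classifier(fused)
        # Upsample per-class logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)
```
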
This model is an implementation of BiseNet found [here](https://github.com/ooooverflow/BiSeNet).

This repository provides scripts to run BiseNet on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/bisenet).

### Model Details

- **Model Type:** Semantic segmentation
- **Model Stats:**
  - Model checkpoint: best_dice_loss_miou_0.655.pth
  - Inference latency: real time
  - Input resolution: 720x960
  - Number of parameters: 12.0M
  - Model size: 45.7 MB

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| BiseNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 27.835 ms | 9 - 45 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 27.934 ms | 8 - 21 MB | FP16 | NPU | [BiseNet.so](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.so) |
| BiseNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 30.487 ms | 63 - 132 MB | FP16 | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 20.959 ms | 31 - 79 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 20.685 ms | 8 - 36 MB | FP16 | NPU | [BiseNet.so](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.so) |
| BiseNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 23.769 ms | 74 - 116 MB | FP16 | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 19.023 ms | 31 - 60 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 19.195 ms | 8 - 35 MB | FP16 | NPU | Use Export Script |
| BiseNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 21.234 ms | 68 - 110 MB | FP16 | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |
| BiseNet | SA7255P ADP | SA7255P | TFLITE | 485.045 ms | 31 - 56 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | SA7255P ADP | SA7255P | QNN | 484.5 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
| BiseNet | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 28.077 ms | 8 - 40 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | SA8255 (Proxy) | SA8255P Proxy | QNN | 26.44 ms | 4 - 6 MB | FP16 | NPU | Use Export Script |
| BiseNet | SA8295P ADP | SA8295P | TFLITE | 37.645 ms | 32 - 58 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | SA8295P ADP | SA8295P | QNN | 35.679 ms | 3 - 21 MB | FP16 | NPU | Use Export Script |
| BiseNet | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 27.978 ms | 7 - 47 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | SA8650 (Proxy) | SA8650P Proxy | QNN | 26.48 ms | 3 - 5 MB | FP16 | NPU | Use Export Script |
| BiseNet | SA8775P ADP | SA8775P | TFLITE | 41.211 ms | 32 - 57 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | SA8775P ADP | SA8775P | QNN | 39.407 ms | 2 - 12 MB | FP16 | NPU | Use Export Script |
| BiseNet | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 485.045 ms | 31 - 56 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 484.5 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
| BiseNet | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 27.8 ms | 32 - 59 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 26.501 ms | 8 - 11 MB | FP16 | NPU | Use Export Script |
| BiseNet | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 41.211 ms | 32 - 57 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 39.407 ms | 2 - 12 MB | FP16 | NPU | Use Export Script |
| BiseNet | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 35.686 ms | 32 - 81 MB | FP16 | NPU | [BiseNet.tflite](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.tflite) |
| BiseNet | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 35.601 ms | 8 - 36 MB | FP16 | NPU | Use Export Script |
| BiseNet | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 25.221 ms | 8 - 8 MB | FP16 | NPU | Use Export Script |
| BiseNet | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 29.943 ms | 66 - 66 MB | FP16 | NPU | [BiseNet.onnx](https://huggingface.co/qualcomm/BiseNet/blob/main/BiseNet.onnx) |

## Installation

Install the package via pip:
```bash
pip install qai-hub-models
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on the
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.

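Once configured, you can sanity-check the setup from Python. A minimal sketch, assuming your API token is in place (`qai_hub.get_devices()` lists the cloud-hosted devices visible to your account):

```python
import qai_hub as hub

# List a few cloud-hosted devices to confirm the client is configured correctly.
for device in hub.get_devices()[:5]:
    print(device.name, device.os)
```
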
## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.bisenet.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: To run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the above):
```
%run -m qai_hub_models.models.bisenet.demo
```

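The demo's exact pre- and post-processing live in the package; as a rough illustration, here is a hedged sketch of the typical flow for a segmentation model like this one. It assumes a normalized NCHW float input and per-class logit output; the packaged demo handles this checkpoint's exact normalization:

```python
import numpy as np
import torch
from PIL import Image

def segment(model: torch.nn.Module, image: Image.Image,
            height: int = 720, width: int = 960) -> np.ndarray:
    """Run a segmentation model on a PIL image; return a per-pixel class map."""
    # Pre-processing: resize to the model's input resolution, scale to [0, 1],
    # and lay out as an NCHW float tensor.
    resized = image.convert("RGB").resize((width, height))  # PIL expects (W, H)
    tensor = torch.from_numpy(np.array(resized)).float().permute(2, 0, 1) / 255.0
    # Inference.
    with torch.no_grad():
        logits = model(tensor.unsqueeze(0))  # (1, num_classes, H, W)
    # Post-processing: pick the highest-scoring class per pixel.
    return logits.argmax(dim=1).squeeze(0).numpy()
```
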
### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Checks performance on-device on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android
* Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.bisenet.export
```
```
Profiling Results
------------------------------------------------------------
BiseNet
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 27.8
Estimated peak memory usage (MB): [9, 45]
Total # Ops                     : 63
Compute Unit(s)                 : NPU (63 ops)
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/bisenet/qai_hub_models/models/BiseNet/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.bisenet import Model

# Load the pre-trained PyTorch model
torch_model = Model.from_pretrained()

# Cloud-hosted device to compile for
device = hub.Device("Samsung Galaxy S24")

# Trace the model with sample inputs
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile the traced model for the target device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

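After submission, the returned job object can be used to track progress. A hedged sketch follows; `wait()` and the `url` attribute are assumptions here, so consult the `qai_hub` client documentation for the exact job API:

```python
# Assumed API: block until profiling completes, then print the job page URL
# where the detailed on-device metrics can be viewed.
status = profile_job.wait()
print(f"Profiling finished with status: {status}")
print(f"Metrics: {profile_job.url}")
```
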
Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR or relative
error, or spot-check the output against the expected output.

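For instance, a minimal PSNR check between the PyTorch reference output and the on-device output might look like the sketch below. The indexing into `on_device_output` is an assumption (AI Hub returns outputs keyed by output name); adapt it to the structure you actually get back:

```python
import numpy as np
import torch

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two same-shaped arrays, in dB."""
    mse = float(np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2))
    if mse == 0.0:
        return float("inf")
    peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10(peak ** 2 / mse)

# Reference output from the PyTorch model on the same sample inputs.
torch_output = torch_model(*(torch.tensor(data[0]) for data in input_data.values()))

# Hypothetical indexing: assumes a single output tensor in the returned mapping.
device_output = np.asarray(list(on_device_output.values())[0][0])

print(f"PSNR: {psnr(torch_output.detach().numpy(), device_output):.2f} dB")
```
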
**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.bisenet.demo --on-device
```

**NOTE**: To run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell (instead of the above):
```
%run -m qai_hub_models.models.bisenet.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application.

- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub
Get more details on BiseNet's performance across various devices [here](https://aihub.qualcomm.com/models/bisenet).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
* The original implementation of BiseNet does not provide a license.
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897)
* [Source Model Implementation](https://github.com/ooooverflow/BiSeNet)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).